Test Report: Hyper-V_Windows 19690

f8db61c9b74e1fc8d4208c01add19855c5953b45 : 2024-09-23 : 36339

Failed tests (37/200)

Order | Failed test | Duration (s)
33 TestAddons/parallel/Registry 119.1
55 TestErrorSpam/setup 173.06
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 28.91
80 TestFunctional/serial/ExtraConfig 330.55
81 TestFunctional/serial/ComponentHealth 120.62
84 TestFunctional/serial/InvalidService 4.22
90 TestFunctional/parallel/StatusCmd 226.43
94 TestFunctional/parallel/ServiceCmdConnect 174.17
96 TestFunctional/parallel/PersistentVolumeClaim 486.34
100 TestFunctional/parallel/MySQL 112.51
106 TestFunctional/parallel/NodeLabels 360.93
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 7.99
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 4.21
121 TestFunctional/parallel/ServiceCmd/DeployApp 2.15
122 TestFunctional/parallel/ServiceCmd/List 6.69
123 TestFunctional/parallel/ServiceCmd/JSONOutput 6.65
124 TestFunctional/parallel/ServiceCmd/HTTPS 6.64
125 TestFunctional/parallel/ServiceCmd/Format 6.44
126 TestFunctional/parallel/ServiceCmd/URL 6.48
132 TestFunctional/parallel/DockerEnv/powershell 470.51
136 TestFunctional/parallel/ImageCommands/ImageListShort 60.09
137 TestFunctional/parallel/ImageCommands/ImageListTable 60.23
138 TestFunctional/parallel/ImageCommands/ImageListJson 60.07
139 TestFunctional/parallel/ImageCommands/ImageListYaml 60.05
140 TestFunctional/parallel/ImageCommands/ImageBuild 120.3
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 86.78
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 120.3
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 120.59
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 120.36
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.33
157 TestMultiControlPlane/serial/PingHostFromPods 64.76
164 TestMultiControlPlane/serial/RestartSecondaryNode 163.34
220 TestMultiNode/serial/PingHostFrom2Pods 51.54
227 TestMultiNode/serial/RestartKeepsNodes 539.21
228 TestMultiNode/serial/DeleteNode 51.5
240 TestKubernetesUpgrade 10800.357
253 TestNoKubernetes/serial/StartWithK8s 310.81
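The Registry failure detailed below boils down to a spider-style HTTP probe timing out: the test runs a busybox pod that hits the registry Service with `wget --spider` and fails with exit status 1 if no response arrives in time. As a minimal, self-contained sketch of that kind of probe (a local HTTP server stands in for the in-cluster registry Service; everything here is illustrative, not minikube's actual test code):

```python
# Illustrative sketch only -- models the "probe an HTTP endpoint, fail on
# timeout" check that TestAddons/parallel/Registry performs in-cluster.
import http.server
import threading
import urllib.request

# Stand-in for the registry Service (port 0 = pick any free port).
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
try:
    # Equivalent in spirit to `wget --spider -S <url>`: succeed iff the
    # endpoint answers before the timeout; a hang here is what produces
    # the "timed out waiting for the condition" error in the log.
    status = urllib.request.urlopen(url, timeout=5).status
    print("probe ok" if status == 200 else "probe failed")
finally:
    server.shutdown()
```

In the failing run, the analogous request to `http://registry.kube-system.svc.cluster.local` never completed, so kubectl reported the timeout seen in the stderr section.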
TestAddons/parallel/Registry (119.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 4.3046ms
I0923 11:25:14.389377    3844 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-2x52r" [56baf2a1-7092-48e6-bb7f-3ff56671ab95] Running
I0923 11:25:14.399016    3844 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 11:25:14.399016    3844 kapi.go:107] duration metric: took 9.638ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007122s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lxdlz" [8fba184d-ef78-4494-8f07-f1c7b3232682] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0056932s
addons_test.go:338: (dbg) Run:  kubectl --context addons-526200 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-526200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-526200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.178721s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-526200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 ip
addons_test.go:357: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 ip: (2.3240087s)
2024/09/23 11:26:28 [DEBUG] GET http://172.19.158.244:5000
addons_test.go:386: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable registry --alsologtostderr -v=1
addons_test.go:386: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable registry --alsologtostderr -v=1: (14.5323554s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-526200 -n addons-526200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-526200 -n addons-526200: (11.4240459s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 logs -n 25: (7.8950615s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p download-only-291700                                                                     | download-only-291700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC | 23 Sep 24 11:08 UTC |
	| delete  | -p download-only-668400                                                                     | download-only-668400 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC | 23 Sep 24 11:08 UTC |
	| delete  | -p download-only-291700                                                                     | download-only-291700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC | 23 Sep 24 11:08 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-627900 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC |                     |
	|         | binary-mirror-627900                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:54606                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-627900                                                                     | binary-mirror-627900 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC | 23 Sep 24 11:08 UTC |
	| addons  | disable dashboard -p                                                                        | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC |                     |
	|         | addons-526200                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC |                     |
	|         | addons-526200                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-526200 --wait=true                                                                | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC | 23 Sep 24 11:15 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-526200 addons disable                                                                | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:16 UTC | 23 Sep 24 11:16 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | addons-526200 addons disable                                                                | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |                   |         |                     |                     |
	| ssh     | addons-526200 ssh curl -s                                                                   | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
	| ip      | addons-526200 ip                                                                            | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	| addons  | addons-526200 addons disable                                                                | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:25 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:26 UTC |
	|         | addons-526200                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-526200 addons disable                                                                | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:26 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | addons-526200 addons                                                                        | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:25 UTC | 23 Sep 24 11:26 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-526200 addons                                                                        | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-526200 addons                                                                        | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ip      | addons-526200 ip                                                                            | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	| ssh     | addons-526200 ssh cat                                                                       | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | /opt/local-path-provisioner/pvc-ec44c691-0529-4f83-b313-a77082d0c7d8_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | addons-526200 addons disable                                                                | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | -p addons-526200                                                                            |                      |                   |         |                     |                     |
	| addons  | addons-526200 addons disable                                                                | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC | 23 Sep 24 11:26 UTC |
	|         | addons-526200                                                                               |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-526200        | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:26 UTC |                     |
	|         | -p addons-526200                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:08:44
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:08:44.251039    2660 out.go:345] Setting OutFile to fd 792 ...
	I0923 11:08:44.303103    2660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:08:44.303103    2660 out.go:358] Setting ErrFile to fd 784...
	I0923 11:08:44.303103    2660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:08:44.320245    2660 out.go:352] Setting JSON to false
	I0923 11:08:44.323297    2660 start.go:129] hostinfo: {"hostname":"minikube5","uptime":485700,"bootTime":1726604023,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:08:44.323297    2660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:08:44.329599    2660 out.go:177] * [addons-526200] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:08:44.332845    2660 notify.go:220] Checking for updates...
	I0923 11:08:44.333335    2660 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:08:44.335098    2660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:08:44.338097    2660 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:08:44.339838    2660 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:08:44.341837    2660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:08:44.345034    2660 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:08:49.163741    2660 out.go:177] * Using the hyperv driver based on user configuration
	I0923 11:08:49.178125    2660 start.go:297] selected driver: hyperv
	I0923 11:08:49.178125    2660 start.go:901] validating driver "hyperv" against <nil>
	I0923 11:08:49.178948    2660 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:08:49.221035    2660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:08:49.221035    2660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:08:49.222119    2660 cni.go:84] Creating CNI manager for ""
	I0923 11:08:49.222212    2660 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:08:49.222240    2660 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 11:08:49.222378    2660 start.go:340] cluster config:
	{Name:addons-526200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-526200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:08:49.222378    2660 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:08:49.365305    2660 out.go:177] * Starting "addons-526200" primary control-plane node in "addons-526200" cluster
	I0923 11:08:49.368476    2660 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:08:49.368725    2660 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:08:49.368779    2660 cache.go:56] Caching tarball of preloaded images
	I0923 11:08:49.369122    2660 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:08:49.369295    2660 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:08:49.369733    2660 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\config.json ...
	I0923 11:08:49.369949    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\config.json: {Name:mk9653704dcfbc3c952938ceba2c45cdd2ff302f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:08:49.371457    2660 start.go:360] acquireMachinesLock for addons-526200: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:08:49.371765    2660 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-526200"
	I0923 11:08:49.371957    2660 start.go:93] Provisioning new machine with config: &{Name:addons-526200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:addons-526200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:08:49.372080    2660 start.go:125] createHost starting for "" (driver="hyperv")
	I0923 11:08:49.374093    2660 out.go:235] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 11:08:49.374848    2660 start.go:159] libmachine.API.Create for "addons-526200" (driver="hyperv")
	I0923 11:08:49.374848    2660 client.go:168] LocalClient.Create starting
	I0923 11:08:49.375644    2660 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0923 11:08:49.484965    2660 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0923 11:08:49.632975    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0923 11:08:51.446182    2660 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0923 11:08:51.446182    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:08:51.446182    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0923 11:08:52.941927    2660 main.go:141] libmachine: [stdout =====>] : False
	
	I0923 11:08:52.941927    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:08:52.942006    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 11:08:54.249378    2660 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 11:08:54.249378    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:08:54.250178    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 11:08:57.349393    2660 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 11:08:57.349393    2660 main.go:141] libmachine: [stderr =====>] : 
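The `Get-VMSwitch` pipeline above keeps External switches plus the built-in Default Switch (matched by its fixed GUID) and sorts by `SwitchType`; the log then reports `Using switch "Default Switch"` because no External switch exists. A minimal Python sketch of that selection logic (illustrative only — minikube's actual implementation is in Go, and the External-first preference here is an assumption inferred from the sort):

```python
import json

DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"
EXTERNAL = 2  # Hyper-V VMSwitchType enum: 0=Private, 1=Internal, 2=External

def pick_switch(raw: str) -> str:
    """Mirror the Where-Object/Sort-Object pipeline from the log:
    keep External switches plus the built-in Default Switch,
    and prefer an External one when available (assumption)."""
    switches = json.loads(raw)
    usable = [s for s in switches
              if s["SwitchType"] == EXTERNAL or s["Id"] == DEFAULT_SWITCH_ID]
    if not usable:
        raise RuntimeError("no usable Hyper-V switch found")
    usable.sort(key=lambda s: s["SwitchType"], reverse=True)
    return usable[0]["Name"]

# The exact stdout captured in the log above.
stdout = """[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]"""
print(pick_switch(stdout))  # Default Switch, as in the log
```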
	I0923 11:08:57.351347    2660 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 11:08:57.742033    2660 main.go:141] libmachine: Creating SSH key...
	I0923 11:08:57.984313    2660 main.go:141] libmachine: Creating VM...
	I0923 11:08:57.984313    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 11:09:00.435513    2660 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 11:09:00.435513    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:00.435678    2660 main.go:141] libmachine: Using switch "Default Switch"
	I0923 11:09:00.435861    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 11:09:01.983702    2660 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 11:09:01.984323    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:01.984323    2660 main.go:141] libmachine: Creating VHD
	I0923 11:09:01.984323    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0923 11:09:05.217118    2660 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 30A21CBC-C70D-4B2D-8453-B582E795E57F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0923 11:09:05.218194    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:05.218260    2660 main.go:141] libmachine: Writing magic tar header
	I0923 11:09:05.218260    2660 main.go:141] libmachine: Writing SSH key tar header
	I0923 11:09:05.227412    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0923 11:09:08.107935    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:08.107935    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:08.108269    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\disk.vhd' -SizeBytes 20000MB
	I0923 11:09:10.371377    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:10.371377    2660 main.go:141] libmachine: [stderr =====>] : 
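The sequence above explains the earlier "Writing magic tar header" / "Writing SSH key tar header" steps: a tiny *fixed* VHD is raw data plus a footer, so a tar archive carrying the SSH key can be written straight into it before `Convert-VHD` turns it into a dynamic disk and `Resize-VHD` grows it. A hedged Python sketch of just the tarball-building step (the archive member path is illustrative, not minikube's actual layout):

```python
import io
import tarfile

def build_key_tarball(key_bytes: bytes) -> bytes:
    """Build an in-memory tar archive holding an SSH key, the kind of
    payload that can be written directly into a small fixed VHD
    (raw bytes + footer) before conversion to a dynamic VHD."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tf:
        info = tarfile.TarInfo(name=".ssh/id_rsa")  # hypothetical member path
        info.size = len(key_bytes)
        tf.addfile(info, io.BytesIO(key_bytes))
    return buf.getvalue()

blob = build_key_tarball(b"fake-key-material")
# Tar output is written in 512-byte blocks, which is what makes it safe
# to drop onto a raw disk image whose sector size is also 512 bytes.
print(len(blob) % 512 == 0)
```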
	I0923 11:09:10.371377    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-526200 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0923 11:09:13.506217    2660 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-526200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0923 11:09:13.506497    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:13.506608    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-526200 -DynamicMemoryEnabled $false
	I0923 11:09:15.461634    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:15.462625    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:15.462625    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-526200 -Count 2
	I0923 11:09:17.286110    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:17.286110    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:17.286726    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-526200 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\boot2docker.iso'
	I0923 11:09:19.460726    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:19.460726    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:19.461711    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-526200 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\disk.vhd'
	I0923 11:09:21.779675    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:21.780007    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:21.780007    2660 main.go:141] libmachine: Starting VM...
	I0923 11:09:21.780007    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-526200
	I0923 11:09:24.591667    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:24.592130    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:24.592130    2660 main.go:141] libmachine: Waiting for host to start...
	I0923 11:09:24.592130    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:09:26.610767    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:09:26.610767    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:26.610767    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:09:28.759760    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:28.760750    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:29.761924    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:09:31.661919    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:09:31.661919    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:31.661919    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:09:33.843885    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:33.843885    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:34.844472    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:09:36.702890    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:09:36.702890    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:36.702890    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:09:38.868869    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:38.868869    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:39.869831    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:09:41.758665    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:09:41.759440    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:41.759506    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:09:43.892121    2660 main.go:141] libmachine: [stdout =====>] : 
	I0923 11:09:43.892121    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:44.892896    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:09:46.820668    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:09:46.820668    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:46.820862    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:09:49.051737    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:09:49.051737    2660 main.go:141] libmachine: [stderr =====>] : 
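The "Waiting for host to start..." section above is a poll loop: query VM state, query the first adapter's first IP, sleep roughly a second, repeat until an address (here 172.19.158.244) appears. A small Python sketch of that retry shape (function names and the attempt cap are illustrative, not minikube's actual Go code):

```python
import time

def wait_for_ip(get_ip, interval=1.0, attempts=30):
    """Poll until the VM's first network adapter reports an address,
    mirroring the repeated (Get-VM ...).networkadapters[0].ipaddresses[0]
    calls in the log. `attempts` is an assumed cap, not minikube's."""
    for _ in range(attempts):
        ip = get_ip()  # stand-in for the PowerShell ipaddresses[0] query
        if ip:
            return ip
        time.sleep(interval)
    raise TimeoutError("VM never reported an IP address")

# Simulate a VM that answers on the fifth poll, as in the log above.
answers = iter(["", "", "", "", "172.19.158.244"])
print(wait_for_ip(lambda: next(answers), interval=0))  # 172.19.158.244
```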
	I0923 11:09:49.051737    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:09:50.915267    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:09:50.915267    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:50.915913    2660 machine.go:93] provisionDockerMachine start ...
	I0923 11:09:50.916074    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:09:52.706606    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:09:52.706606    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:52.706606    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:09:54.861473    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:09:54.861473    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:54.866595    2660 main.go:141] libmachine: Using SSH client type: native
	I0923 11:09:54.878752    2660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.158.244 22 <nil> <nil>}
	I0923 11:09:54.878819    2660 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:09:55.006953    2660 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 11:09:55.006953    2660 buildroot.go:166] provisioning hostname "addons-526200"
	I0923 11:09:55.006953    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:09:56.813653    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:09:56.813653    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:56.814782    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:09:58.967314    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:09:58.967314    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:09:58.973617    2660 main.go:141] libmachine: Using SSH client type: native
	I0923 11:09:58.974198    2660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.158.244 22 <nil> <nil>}
	I0923 11:09:58.974198    2660 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-526200 && echo "addons-526200" | sudo tee /etc/hostname
	I0923 11:09:59.120936    2660 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-526200
	
	I0923 11:09:59.121115    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:00.940586    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:00.940586    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:00.940586    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:03.183943    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:03.184584    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:03.188430    2660 main.go:141] libmachine: Using SSH client type: native
	I0923 11:10:03.189085    2660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.158.244 22 <nil> <nil>}
	I0923 11:10:03.189085    2660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-526200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-526200/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-526200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:10:03.336656    2660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:10:03.336755    2660 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 11:10:03.336860    2660 buildroot.go:174] setting up certificates
	I0923 11:10:03.336860    2660 provision.go:84] configureAuth start
	I0923 11:10:03.336972    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:05.235979    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:05.236594    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:05.236742    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:07.430814    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:07.430814    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:07.430961    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:09.276800    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:09.276800    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:09.276800    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:11.497469    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:11.497469    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:11.497469    2660 provision.go:143] copyHostCerts
	I0923 11:10:11.498383    2660 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 11:10:11.500242    2660 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 11:10:11.501571    2660 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 11:10:11.502685    2660 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-526200 san=[127.0.0.1 172.19.158.244 addons-526200 localhost minikube]
	I0923 11:10:11.674760    2660 provision.go:177] copyRemoteCerts
	I0923 11:10:11.682493    2660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:10:11.683045    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:13.530696    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:13.530696    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:13.530779    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:15.737262    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:15.737262    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:15.738660    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:10:15.850693    2660 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1678422s)
	I0923 11:10:15.851462    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 11:10:15.895162    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:10:15.934183    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:10:15.975147    2660 provision.go:87] duration metric: took 12.6374331s to configureAuth
	I0923 11:10:15.975147    2660 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:10:15.975685    2660 config.go:182] Loaded profile config "addons-526200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:10:15.975817    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:17.802396    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:17.802420    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:17.802480    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:19.996511    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:19.996511    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:20.001561    2660 main.go:141] libmachine: Using SSH client type: native
	I0923 11:10:20.001561    2660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.158.244 22 <nil> <nil>}
	I0923 11:10:20.002084    2660 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:10:20.136241    2660 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 11:10:20.136276    2660 buildroot.go:70] root file system type: tmpfs
	I0923 11:10:20.136726    2660 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:10:20.136784    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:22.048326    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:22.048482    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:22.048482    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:24.300134    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:24.300134    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:24.305375    2660 main.go:141] libmachine: Using SSH client type: native
	I0923 11:10:24.306101    2660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.158.244 22 <nil> <nil>}
	I0923 11:10:24.306101    2660 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:10:24.468038    2660 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 11:10:24.468038    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:26.328497    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:26.328774    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:26.328774    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:28.530277    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:28.530277    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:28.536292    2660 main.go:141] libmachine: Using SSH client type: native
	I0923 11:10:28.536924    2660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.158.244 22 <nil> <nil>}
	I0923 11:10:28.536924    2660 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:10:30.646900    2660 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
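The `sudo diff -u ... || { sudo mv ...; systemctl ... }` command above is an idempotent-update idiom: the unit file is replaced, and docker reloaded and restarted, only when the rendered content differs from what is on disk (here `diff` fails because the file does not exist yet, so the new unit is installed and enabled). A minimal Python sketch of the same idea, under the assumption that the caller performs the daemon-reload/restart when it returns True:

```python
import os
import tempfile

def install_if_changed(new_text: str, path: str) -> bool:
    """Replace a config file only when its content actually changed,
    mirroring the diff-or-move shell idiom in the log. Returns True
    when the caller should daemon-reload and restart the service."""
    old_text = None
    if os.path.exists(path):
        with open(path) as f:
            old_text = f.read()
    if old_text == new_text:
        return False  # unchanged: skip the disruptive restart
    with open(path, "w") as f:
        f.write(new_text)
    return True

unit = "[Unit]\nDescription=Docker Application Container Engine\n"
path = os.path.join(tempfile.mkdtemp(), "docker.service")
print(install_if_changed(unit, path))  # True: file did not exist yet
print(install_if_changed(unit, path))  # False: identical content
```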
	I0923 11:10:30.647046    2660 machine.go:96] duration metric: took 39.7283028s to provisionDockerMachine
	I0923 11:10:30.647074    2660 client.go:171] duration metric: took 1m41.2653595s to LocalClient.Create
	I0923 11:10:30.647074    2660 start.go:167] duration metric: took 1m41.2653883s to libmachine.API.Create "addons-526200"
	I0923 11:10:30.647156    2660 start.go:293] postStartSetup for "addons-526200" (driver="hyperv")
	I0923 11:10:30.647188    2660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:10:30.655726    2660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:10:30.655726    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:32.529081    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:32.529081    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:32.529309    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:34.726321    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:34.726395    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:34.726395    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:10:34.836325    2660 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1803164s)
	I0923 11:10:34.848337    2660 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:10:34.855249    2660 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:10:34.855249    2660 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 11:10:34.855249    2660 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 11:10:34.855918    2660 start.go:296] duration metric: took 4.2083542s for postStartSetup
	I0923 11:10:34.857680    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:36.690602    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:36.690892    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:36.690983    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:38.885175    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:38.885175    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:38.885499    2660 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\config.json ...
	I0923 11:10:38.887581    2660 start.go:128] duration metric: took 1m49.5081054s to createHost
	I0923 11:10:38.887581    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:40.718288    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:40.718288    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:40.718892    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:42.887720    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:42.887720    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:42.892349    2660 main.go:141] libmachine: Using SSH client type: native
	I0923 11:10:42.892733    2660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.158.244 22 <nil> <nil>}
	I0923 11:10:42.892733    2660 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:10:43.033974    2660 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727089843.242022137
	
	I0923 11:10:43.033974    2660 fix.go:216] guest clock: 1727089843.242022137
	I0923 11:10:43.034097    2660 fix.go:229] Guest: 2024-09-23 11:10:43.242022137 +0000 UTC Remote: 2024-09-23 11:10:38.8875813 +0000 UTC m=+114.703363901 (delta=4.354440837s)
	I0923 11:10:43.034255    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:44.884241    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:44.884241    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:44.884793    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:47.044811    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:47.044811    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:47.049500    2660 main.go:141] libmachine: Using SSH client type: native
	I0923 11:10:47.049889    2660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.158.244 22 <nil> <nil>}
	I0923 11:10:47.049983    2660 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727089843
	I0923 11:10:47.191925    2660 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 11:10:43 UTC 2024
	
	I0923 11:10:47.192049    2660 fix.go:236] clock set: Mon Sep 23 11:10:43 UTC 2024
	 (err=<nil>)
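The clock-fix sequence above reads the guest clock over SSH (`date +%s.%N`), computes the delta against the host, and pins the guest with `sudo date -s @<epoch>`. A minimal sketch of that logic (function names and the integer-second arithmetic are illustrative; minikube's real code compares full timestamps):

```shell
#!/bin/sh
# Sketch of the guest-clock fix shown in the log. clock_delta and fix_cmd
# are illustrative names, not minikube functions.
clock_delta() {
    guest_epoch=$1   # e.g. the output of `ssh vm date +%s`
    host_epoch=$2    # local `date +%s`
    echo $(( guest_epoch - host_epoch ))
}

fix_cmd() {
    host_epoch=$1
    # The log runs this command over SSH inside the VM to correct its clock:
    echo "sudo date -s @${host_epoch}"
}
```

In the log above the observed delta was about 4.35s, after which `sudo date -s @1727089843` brought the guest back in line with the host.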
	I0923 11:10:47.192121    2660 start.go:83] releasing machines lock for "addons-526200", held for 1m57.8123999s
	I0923 11:10:47.192603    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:49.031051    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:49.031051    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:49.031791    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:51.170107    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:51.170107    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:51.173262    2660 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 11:10:51.173409    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:51.180537    2660 ssh_runner.go:195] Run: cat /version.json
	I0923 11:10:51.180537    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:10:53.064406    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:53.064670    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:53.064670    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:53.065353    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:10:53.065353    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:53.065353    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:10:55.352202    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:55.352536    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:55.353050    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:10:55.374895    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:10:55.375430    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:10:55.375839    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:10:55.452237    2660 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.2786859s)
	W0923 11:10:55.452350    2660 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 11:10:55.469128    2660 ssh_runner.go:235] Completed: cat /version.json: (4.2883022s)
	I0923 11:10:55.478183    2660 ssh_runner.go:195] Run: systemctl --version
	I0923 11:10:55.495796    2660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:10:55.503737    2660 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:10:55.516324    2660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:10:55.550170    2660 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 11:10:55.550170    2660 start.go:495] detecting cgroup driver to use...
	I0923 11:10:55.550170    2660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0923 11:10:55.568925    2660 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 11:10:55.568925    2660 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 11:10:55.592890    2660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:10:55.619154    2660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:10:55.637007    2660 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:10:55.647488    2660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:10:55.671332    2660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:10:55.698662    2660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:10:55.729201    2660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:10:55.757979    2660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:10:55.783262    2660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:10:55.812596    2660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:10:55.837257    2660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
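The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to switch the sandbox image, force the `cgroupfs` driver (`SystemdCgroup = false`), and point `conf_dir` at `/etc/cni/net.d`. The same edits, condensed and applied to a scratch copy rather than the live file (the sample TOML content is illustrative):

```shell
#!/bin/sh
# Apply the log's style of in-place config edits to a temp copy of a
# containerd config fragment instead of the real /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.9"
    SystemdCgroup = true
    conf_dir = "/opt/cni/net.d"
EOF
# Same sed expressions the log runs (minus the sudo wrapper):
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"
cat "$cfg"
```

The `\1` backreference preserves the original indentation, which keeps the edits valid wherever the keys sit inside the TOML tree.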
	I0923 11:10:55.863539    2660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:10:55.879730    2660 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 11:10:55.891848    2660 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 11:10:55.919489    2660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:10:55.944103    2660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:10:56.115790    2660 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:10:56.148141    2660 start.go:495] detecting cgroup driver to use...
	I0923 11:10:56.159416    2660 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:10:56.192710    2660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:10:56.217838    2660 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:10:56.253928    2660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:10:56.285920    2660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:10:56.315795    2660 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 11:10:56.377567    2660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:10:56.397863    2660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:10:56.439786    2660 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:10:56.453320    2660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:10:56.468585    2660 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:10:56.508213    2660 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:10:56.694308    2660 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:10:56.855462    2660 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:10:56.856036    2660 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:10:56.895917    2660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:10:57.069085    2660 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:10:59.579258    2660 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5099145s)
	I0923 11:10:59.588245    2660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 11:10:59.618264    2660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:10:59.651462    2660 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 11:10:59.824000    2660 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 11:11:00.011110    2660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:11:00.196848    2660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 11:11:00.235093    2660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:11:00.265446    2660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:11:00.448299    2660 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 11:11:00.548751    2660 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 11:11:00.559930    2660 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 11:11:00.568052    2660 start.go:563] Will wait 60s for crictl version
	I0923 11:11:00.579333    2660 ssh_runner.go:195] Run: which crictl
	I0923 11:11:00.596751    2660 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:11:00.648947    2660 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 11:11:00.659064    2660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:11:00.697117    2660 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:11:00.728298    2660 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 11:11:00.728298    2660 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 11:11:00.731769    2660 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 11:11:00.731769    2660 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 11:11:00.731769    2660 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 11:11:00.731769    2660 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 11:11:00.733440    2660 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 11:11:00.734374    2660 ip.go:214] interface addr: 172.19.144.1/20
	I0923 11:11:00.741735    2660 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 11:11:00.748055    2660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
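The `/etc/hosts` update above uses a grep-out-then-append pattern that stays idempotent across reruns: strip any stale line for the name, append the fresh mapping, then copy the result back. The same pattern against a scratch file (paths and the `add_host` helper are illustrative):

```shell
#!/bin/sh
# Idempotent hosts-entry update, mirroring the log's
# "{ grep -v ...; echo ...; } > /tmp/h.$$; cp" pattern on a scratch file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"
tab=$(printf '\t')

add_host() {
    ip=$1; name=$2
    # Drop any existing line ending in "<tab><name>", then append the mapping.
    { grep -v "${tab}${name}\$" "$hosts"; printf '%s\t%s\n' "$ip" "$name"; } > "$hosts.new"
    mv "$hosts.new" "$hosts"
}

add_host 172.19.144.1 host.minikube.internal
add_host 172.19.144.1 host.minikube.internal   # second run replaces, never duplicates
```

The same pattern recurs later in the log for the `control-plane.minikube.internal` entry.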
	I0923 11:11:00.770352    2660 kubeadm.go:883] updating cluster {Name:addons-526200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:addons-526200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.158.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:11:00.770650    2660 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:11:00.779772    2660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:11:00.801484    2660 docker.go:685] Got preloaded images: 
	I0923 11:11:00.801587    2660 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0923 11:11:00.813089    2660 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 11:11:00.842871    2660 ssh_runner.go:195] Run: which lz4
	I0923 11:11:00.857100    2660 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 11:11:00.863428    2660 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 11:11:00.863606    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0923 11:11:02.245041    2660 docker.go:649] duration metric: took 1.3959787s to copy over tarball
	I0923 11:11:02.253868    2660 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 11:11:07.188720    2660 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (4.9338795s)
	I0923 11:11:07.188793    2660 ssh_runner.go:146] rm: /preloaded.tar.lz4
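The preload flow above first probes for the tarball with `stat -c "%s %y"`, copies it over only when that check fails (status 1 here), extracts it, then removes it. A sketch of the check-then-copy step against a temp directory (the `need_copy` helper and the stand-in touch for scp are illustrative; the real extraction uses `tar -I lz4`):

```shell
#!/bin/sh
# Sketch of the log's existence-check-before-copy step for the preload
# tarball, run against a temp directory instead of the VM's /.
dest=$(mktemp -d)

need_copy() {
    # Mirrors: stat -c "%s %y" /preloaded.tar.lz4 (non-zero exit => missing).
    ! stat -c '%s %y' "$dest/preloaded.tar.lz4" >/dev/null 2>&1
}

if need_copy; then
    : > "$dest/preloaded.tar.lz4"   # stand-in for the scp of the real tarball
fi
```

Gating the ~340 MB transfer on a cheap `stat` is what lets repeated provisioning skip the copy when the tarball already exists.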
	I0923 11:11:07.248473    2660 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 11:11:07.264946    2660 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0923 11:11:07.307462    2660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:11:07.481574    2660 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:11:13.332557    2660 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.8505888s)
	I0923 11:11:13.344123    2660 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:11:13.368405    2660 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 11:11:13.368511    2660 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:11:13.368582    2660 kubeadm.go:934] updating node { 172.19.158.244 8443 v1.31.1 docker true true} ...
	I0923 11:11:13.368582    2660 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-526200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.158.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-526200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:11:13.378268    2660 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 11:11:13.439424    2660 cni.go:84] Creating CNI manager for ""
	I0923 11:11:13.439424    2660 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:11:13.439539    2660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:11:13.439637    2660 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.158.244 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-526200 NodeName:addons-526200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.158.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.158.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:11:13.440082    2660 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.158.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-526200"
	  kubeletExtraArgs:
	    node-ip: 172.19.158.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.158.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 11:11:13.451861    2660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:11:13.468213    2660 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:11:13.478353    2660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:11:13.494246    2660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0923 11:11:13.522227    2660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:11:13.552790    2660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0923 11:11:13.593046    2660 ssh_runner.go:195] Run: grep 172.19.158.244	control-plane.minikube.internal$ /etc/hosts
	I0923 11:11:13.597626    2660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.158.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:11:13.627721    2660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:11:13.797837    2660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:11:13.822921    2660 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200 for IP: 172.19.158.244
	I0923 11:11:13.823095    2660 certs.go:194] generating shared ca certs ...
	I0923 11:11:13.823095    2660 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:13.823583    2660 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 11:11:14.016421    2660 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt ...
	I0923 11:11:14.017421    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt: {Name:mkecc83abf7dbcd2f2b0fd63bac36f2a7fe554cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.018440    2660 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key ...
	I0923 11:11:14.018440    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key: {Name:mk56e2872d5c5070a04729e59e76e7398d15f15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.019642    2660 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 11:11:14.114927    2660 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0923 11:11:14.114927    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkfcb9723e08b8d76b8a2e73084c13f930548396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.115929    2660 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key ...
	I0923 11:11:14.115929    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkd23bfd48ce10457a367dee40c81533c5cc7b5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.116772    2660 certs.go:256] generating profile certs ...
	I0923 11:11:14.117867    2660 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\client.key
	I0923 11:11:14.118537    2660 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\client.crt with IP's: []
	I0923 11:11:14.245953    2660 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\client.crt ...
	I0923 11:11:14.245953    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\client.crt: {Name:mk2b85e8b88d3b0bba9f0a54dd19b90513482d6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.247746    2660 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\client.key ...
	I0923 11:11:14.247746    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\client.key: {Name:mk9404aaa17f67de370075a854bb4b6667f2d603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.249116    2660 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.key.4a608754
	I0923 11:11:14.249374    2660 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.crt.4a608754 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.158.244]
	I0923 11:11:14.565266    2660 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.crt.4a608754 ...
	I0923 11:11:14.565266    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.crt.4a608754: {Name:mkf2c92f1dc3e8a596e334f75292b8dc084d59fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.566203    2660 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.key.4a608754 ...
	I0923 11:11:14.566203    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.key.4a608754: {Name:mkea91ebea77f7175aa501b8c13e96d306b42ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.567750    2660 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.crt.4a608754 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.crt
	I0923 11:11:14.578537    2660 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.key.4a608754 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.key
	I0923 11:11:14.581804    2660 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\proxy-client.key
	I0923 11:11:14.581804    2660 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\proxy-client.crt with IP's: []
	I0923 11:11:14.785696    2660 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\proxy-client.crt ...
	I0923 11:11:14.785696    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\proxy-client.crt: {Name:mk8cd405380d03e4f49729386b0a45963726cf24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.786557    2660 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\proxy-client.key ...
	I0923 11:11:14.786557    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\proxy-client.key: {Name:mkd07a6bad3265d93e9b18d7637061c4ff65463e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:14.798703    2660 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 11:11:14.799429    2660 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 11:11:14.799429    2660 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 11:11:14.800111    2660 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 11:11:14.800905    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:11:14.845339    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 11:11:14.885820    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:11:14.932629    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 11:11:14.971752    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 11:11:15.010407    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:11:15.050746    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:11:15.094606    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-526200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:11:15.135005    2660 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:11:15.176470    2660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:11:15.218534    2660 ssh_runner.go:195] Run: openssl version
	I0923 11:11:15.239057    2660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:11:15.267264    2660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:11:15.273172    2660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:11:15.282285    2660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:11:15.300643    2660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:11:15.328154    2660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:11:15.334947    2660 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:11:15.335042    2660 kubeadm.go:392] StartCluster: {Name:addons-526200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:addons-526200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.158.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:11:15.341752    2660 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 11:11:15.372107    2660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:11:15.396013    2660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:11:15.418987    2660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:11:15.432317    2660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:11:15.432348    2660 kubeadm.go:157] found existing configuration files:
	
	I0923 11:11:15.440177    2660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:11:15.453057    2660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:11:15.464014    2660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:11:15.492313    2660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:11:15.508755    2660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:11:15.516839    2660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:11:15.541972    2660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:11:15.557553    2660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:11:15.565861    2660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:11:15.593559    2660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:11:15.608570    2660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:11:15.617625    2660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:11:15.632808    2660 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 11:11:15.687460    2660 kubeadm.go:310] W0923 11:11:15.898382    1743 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:11:15.688657    2660 kubeadm.go:310] W0923 11:11:15.899353    1743 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:11:15.816354    2660 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 11:11:27.157097    2660 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:11:27.157290    2660 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:11:27.157435    2660 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:11:27.157652    2660 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:11:27.158022    2660 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 11:11:27.158140    2660 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:11:27.161030    2660 out.go:235]   - Generating certificates and keys ...
	I0923 11:11:27.161225    2660 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:11:27.161430    2660 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:11:27.161612    2660 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:11:27.161764    2660 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:11:27.161888    2660 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:11:27.162002    2660 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:11:27.162089    2660 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:11:27.162089    2660 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-526200 localhost] and IPs [172.19.158.244 127.0.0.1 ::1]
	I0923 11:11:27.162089    2660 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:11:27.162767    2660 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-526200 localhost] and IPs [172.19.158.244 127.0.0.1 ::1]
	I0923 11:11:27.162947    2660 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:11:27.163223    2660 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:11:27.163321    2660 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:11:27.163321    2660 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:11:27.163321    2660 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:11:27.163321    2660 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:11:27.163321    2660 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:11:27.163891    2660 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:11:27.163926    2660 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:11:27.163926    2660 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:11:27.163926    2660 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:11:27.167858    2660 out.go:235]   - Booting up control plane ...
	I0923 11:11:27.168530    2660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:11:27.168530    2660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:11:27.168530    2660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:11:27.168530    2660 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:11:27.168530    2660 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:11:27.168530    2660 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:11:27.169505    2660 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:11:27.169505    2660 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:11:27.169505    2660 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001277831s
	I0923 11:11:27.169505    2660 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 11:11:27.169505    2660 kubeadm.go:310] [api-check] The API server is healthy after 6.00203784s
	I0923 11:11:27.170505    2660 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:11:27.170505    2660 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:11:27.170505    2660 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:11:27.170505    2660 kubeadm.go:310] [mark-control-plane] Marking the node addons-526200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:11:27.170505    2660 kubeadm.go:310] [bootstrap-token] Using token: pbhzlg.1izy3culshbgxmlh
	I0923 11:11:27.174314    2660 out.go:235]   - Configuring RBAC rules ...
	I0923 11:11:27.174314    2660 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 11:11:27.174314    2660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 11:11:27.175369    2660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 11:11:27.175369    2660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 11:11:27.175369    2660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 11:11:27.175369    2660 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 11:11:27.175369    2660 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 11:11:27.176367    2660 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 11:11:27.176367    2660 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 11:11:27.176367    2660 kubeadm.go:310] 
	I0923 11:11:27.176367    2660 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 11:11:27.176367    2660 kubeadm.go:310] 
	I0923 11:11:27.176367    2660 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 11:11:27.176367    2660 kubeadm.go:310] 
	I0923 11:11:27.176367    2660 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 11:11:27.177015    2660 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 11:11:27.177159    2660 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 11:11:27.177159    2660 kubeadm.go:310] 
	I0923 11:11:27.177159    2660 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 11:11:27.177159    2660 kubeadm.go:310] 
	I0923 11:11:27.177159    2660 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 11:11:27.177159    2660 kubeadm.go:310] 
	I0923 11:11:27.177159    2660 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 11:11:27.177159    2660 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 11:11:27.177761    2660 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 11:11:27.177792    2660 kubeadm.go:310] 
	I0923 11:11:27.177914    2660 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 11:11:27.178042    2660 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 11:11:27.178042    2660 kubeadm.go:310] 
	I0923 11:11:27.178042    2660 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pbhzlg.1izy3culshbgxmlh \
	I0923 11:11:27.178042    2660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 \
	I0923 11:11:27.178042    2660 kubeadm.go:310] 	--control-plane 
	I0923 11:11:27.178042    2660 kubeadm.go:310] 
	I0923 11:11:27.178639    2660 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 11:11:27.178639    2660 kubeadm.go:310] 
	I0923 11:11:27.178700    2660 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pbhzlg.1izy3culshbgxmlh \
	I0923 11:11:27.178700    2660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
	I0923 11:11:27.178700    2660 cni.go:84] Creating CNI manager for ""
	I0923 11:11:27.178700    2660 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:11:27.181992    2660 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 11:11:27.192002    2660 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 11:11:27.208713    2660 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 11:11:27.244128    2660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:11:27.255244    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:27.258241    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-526200 minikube.k8s.io/updated_at=2024_09_23T11_11_27_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-526200 minikube.k8s.io/primary=true
	I0923 11:11:27.264821    2660 ops.go:34] apiserver oom_adj: -16
	I0923 11:11:27.387813    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:27.889681    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:28.391467    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:28.891018    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:29.388573    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:29.887537    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:30.396742    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:30.891943    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:31.389420    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:31.888783    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:32.388935    2660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:11:32.534911    2660 kubeadm.go:1113] duration metric: took 5.2903143s to wait for elevateKubeSystemPrivileges
	I0923 11:11:32.534911    2660 kubeadm.go:394] duration metric: took 17.1987078s to StartCluster
	I0923 11:11:32.534911    2660 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:32.534911    2660 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:11:32.536068    2660 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:11:32.538051    2660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 11:11:32.538247    2660 start.go:235] Will wait 6m0s for node &{Name: IP:172.19.158.244 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:11:32.538314    2660 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 11:11:32.538542    2660 addons.go:69] Setting yakd=true in profile "addons-526200"
	I0923 11:11:32.538542    2660 addons.go:234] Setting addon yakd=true in "addons-526200"
	I0923 11:11:32.538542    2660 config.go:182] Loaded profile config "addons-526200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:11:32.538542    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.539072    2660 addons.go:69] Setting default-storageclass=true in profile "addons-526200"
	I0923 11:11:32.539177    2660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-526200"
	I0923 11:11:32.539177    2660 addons.go:69] Setting inspektor-gadget=true in profile "addons-526200"
	I0923 11:11:32.539275    2660 addons.go:234] Setting addon inspektor-gadget=true in "addons-526200"
	I0923 11:11:32.539275    2660 addons.go:69] Setting cloud-spanner=true in profile "addons-526200"
	I0923 11:11:32.539436    2660 addons.go:234] Setting addon cloud-spanner=true in "addons-526200"
	I0923 11:11:32.539548    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.539548    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.539548    2660 addons.go:69] Setting ingress=true in profile "addons-526200"
	I0923 11:11:32.540098    2660 addons.go:234] Setting addon ingress=true in "addons-526200"
	I0923 11:11:32.540229    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.540292    2660 addons.go:69] Setting ingress-dns=true in profile "addons-526200"
	I0923 11:11:32.540292    2660 addons.go:69] Setting metrics-server=true in profile "addons-526200"
	I0923 11:11:32.540292    2660 addons.go:234] Setting addon ingress-dns=true in "addons-526200"
	I0923 11:11:32.540464    2660 addons.go:234] Setting addon metrics-server=true in "addons-526200"
	I0923 11:11:32.540549    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.540549    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.540549    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.540549    2660 addons.go:69] Setting storage-provisioner=true in profile "addons-526200"
	I0923 11:11:32.540549    2660 addons.go:234] Setting addon storage-provisioner=true in "addons-526200"
	I0923 11:11:32.540549    2660 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-526200"
	I0923 11:11:32.540549    2660 addons.go:69] Setting registry=true in profile "addons-526200"
	I0923 11:11:32.540549    2660 addons.go:234] Setting addon registry=true in "addons-526200"
	I0923 11:11:32.540549    2660 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-526200"
	I0923 11:11:32.540549    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.541091    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.541231    2660 addons.go:69] Setting gcp-auth=true in profile "addons-526200"
	I0923 11:11:32.541835    2660 addons.go:69] Setting volcano=true in profile "addons-526200"
	I0923 11:11:32.541835    2660 addons.go:234] Setting addon volcano=true in "addons-526200"
	I0923 11:11:32.541835    2660 mustload.go:65] Loading cluster: addons-526200
	I0923 11:11:32.541835    2660 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-526200"
	I0923 11:11:32.541835    2660 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-526200"
	I0923 11:11:32.540549    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.541835    2660 addons.go:69] Setting volumesnapshots=true in profile "addons-526200"
	I0923 11:11:32.542449    2660 addons.go:234] Setting addon volumesnapshots=true in "addons-526200"
	I0923 11:11:32.542576    2660 config.go:182] Loaded profile config "addons-526200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:11:32.542637    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.542684    2660 out.go:177] * Verifying Kubernetes components...
	I0923 11:11:32.541835    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.540549    2660 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-526200"
	I0923 11:11:32.543051    2660 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-526200"
	I0923 11:11:32.543306    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:32.543474    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.544183    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.546000    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.546381    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.546381    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.546381    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.547080    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.549988    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.564146    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.565586    2660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:11:32.566861    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.568986    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.569598    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.569598    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:32.572559    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:33.355225    2660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 11:11:33.756836    2660 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.1911697s)
	I0923 11:11:34.245798    2660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:11:34.799259    2660 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.4430385s)
	I0923 11:11:34.799259    2660 start.go:971] {"host.minikube.internal": 172.19.144.1} host record injected into CoreDNS's ConfigMap
	I0923 11:11:34.807243    2660 node_ready.go:35] waiting up to 6m0s for node "addons-526200" to be "Ready" ...
	I0923 11:11:35.026033    2660 node_ready.go:49] node "addons-526200" has status "Ready":"True"
	I0923 11:11:35.026033    2660 node_ready.go:38] duration metric: took 218.7753ms for node "addons-526200" to be "Ready" ...
	I0923 11:11:35.026033    2660 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:11:35.262824    2660 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:35.984856    2660 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-526200" context rescaled to 1 replicas
	I0923 11:11:37.464867    2660 pod_ready.go:103] pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace has status "Ready":"False"
	I0923 11:11:38.278623    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.278623    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.283046    2660 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 11:11:38.289679    2660 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 11:11:38.297160    2660 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 11:11:38.302349    2660 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:11:38.302349    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 11:11:38.302349    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:38.310393    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.310393    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.325392    2660 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 11:11:38.328398    2660 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:11:38.328398    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 11:11:38.328398    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:38.387340    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.387340    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.389779    2660 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-526200"
	I0923 11:11:38.389779    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:38.390776    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:38.394940    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.394940    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.394940    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:38.421434    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.421434    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.434447    2660 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 11:11:38.435444    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.435444    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.452818    2660 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 11:11:38.457428    2660 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 11:11:38.457428    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 11:11:38.458432    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:38.445437    2660 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:11:38.465530    2660 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 11:11:38.467540    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:38.661605    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.661605    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.663612    2660 addons.go:234] Setting addon default-storageclass=true in "addons-526200"
	I0923 11:11:38.663612    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:38.665610    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:38.731543    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.731543    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.740271    2660 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 11:11:38.747269    2660 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:11:38.747269    2660 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 11:11:38.747269    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:38.754356    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.754356    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.760367    2660 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 11:11:38.767358    2660 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 11:11:38.771549    2660 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:11:38.771549    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 11:11:38.771549    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:38.980825    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.980825    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:38.985824    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:38.985824    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:39.001829    2660 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 11:11:39.031119    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:39.031119    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:39.022036    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:39.027122    2660 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 11:11:39.039548    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:39.044456    2660 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 11:11:39.046380    2660 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:11:39.049373    2660 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:11:39.070297    2660 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 11:11:39.072242    2660 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:11:39.072326    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:39.085526    2660 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:11:39.093527    2660 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:11:39.093613    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:11:39.093718    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:39.105323    2660 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:11:39.105929    2660 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 11:11:39.142964    2660 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 11:11:39.146459    2660 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 11:11:39.133936    2660 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:11:39.148905    2660 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 11:11:39.152144    2660 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 11:11:39.155032    2660 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 11:11:39.159260    2660 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:11:39.159260    2660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 11:11:39.159260    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:39.148905    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 11:11:39.169844    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:39.735344    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:39.735344    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:39.739785    2660 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 11:11:39.741769    2660 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:11:39.741769    2660 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 11:11:39.741769    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:40.084779    2660 pod_ready.go:103] pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace has status "Ready":"False"
	I0923 11:11:41.465232    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:41.465232    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:41.481223    2660 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 11:11:41.488232    2660 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:11:41.488232    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 11:11:41.488232    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:42.114830    2660 pod_ready.go:103] pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace has status "Ready":"False"
	I0923 11:11:43.125287    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:43.125372    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:43.130813    2660 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 11:11:43.139518    2660 out.go:177]   - Using image docker.io/busybox:stable
	I0923 11:11:43.143280    2660 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:11:43.143280    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 11:11:43.143280    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:43.483970    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:43.483970    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:43.483970    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:43.641970    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:43.641970    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:43.641970    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:43.869981    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:43.869981    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:43.869981    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:43.892045    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:43.892699    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:43.892699    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:44.350649    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:44.350649    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:44.350649    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:44.378397    2660 pod_ready.go:103] pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace has status "Ready":"False"
	I0923 11:11:44.432561    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:44.432561    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:44.432561    2660 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:11:44.432561    2660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:11:44.432561    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:44.441525    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:44.441525    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:44.441525    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:44.715841    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:44.716833    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:44.716833    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:44.880466    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:44.880537    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:44.880537    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:44.969574    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:44.969574    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:44.969574    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:45.015211    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:45.015211    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:45.015211    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:45.856953    2660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 11:11:45.857950    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:45.872945    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:45.872945    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:45.872945    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:46.520120    2660 pod_ready.go:103] pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace has status "Ready":"False"
	I0923 11:11:47.514187    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:47.514187    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:47.514187    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:48.793334    2660 pod_ready.go:103] pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace has status "Ready":"False"
	I0923 11:11:48.822836    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:48.822836    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:48.823839    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:49.368755    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:49.368755    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:49.368755    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:50.019219    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:50.019294    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:50.019483    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:50.085375    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:50.085375    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:50.086375    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:50.228240    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:50.229251    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:50.229251    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:50.314961    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:50.314961    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:50.315252    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:50.318274    2660 pod_ready.go:98] pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:50 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.19.158.244 HostIPs:[{IP:172.19.158.244}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 11:11:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 11:11:42 +0000 UTC,FinishedAt:2024-09-23 11:11:49 +0000 UTC,ContainerID:docker://26af4e457c44b6db2a02d684e0eaf1e0e4a1fdc1d9f107c4a0f14bd480cabbaf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://26af4e457c44b6db2a02d684e0eaf1e0e4a1fdc1d9f107c4a0f14bd480cabbaf Started:0xc001d0ad00 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001747490} {Name:kube-api-access-vrtzz MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0017474a0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 11:11:50.318274    2660 pod_ready.go:82] duration metric: took 15.054433s for pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace to be "Ready" ...
	E0923 11:11:50.318274    2660 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-hp27v" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:50 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:11:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.19.158.244 HostIPs:[{IP:172.19.158.244}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-23 11:11:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 11:11:42 +0000 UTC,FinishedAt:2024-09-23 11:11:49 +0000 UTC,ContainerID:docker://26af4e457c44b6db2a02d684e0eaf1e0e4a1fdc1d9f107c4a0f14bd480cabbaf,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://26af4e457c44b6db2a02d684e0eaf1e0e4a1fdc1d9f107c4a0f14bd480cabbaf Started:0xc001d0ad00 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001747490} {Name:kube-api-access-vrtzz MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0017474a0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 11:11:50.318401    2660 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-skqnk" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.344859    2660 pod_ready.go:93] pod "coredns-7c65d6cfc9-skqnk" in "kube-system" namespace has status "Ready":"True"
	I0923 11:11:50.344859    2660 pod_ready.go:82] duration metric: took 26.456ms for pod "coredns-7c65d6cfc9-skqnk" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.344859    2660 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-526200" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.375243    2660 pod_ready.go:93] pod "etcd-addons-526200" in "kube-system" namespace has status "Ready":"True"
	I0923 11:11:50.375328    2660 pod_ready.go:82] duration metric: took 30.4672ms for pod "etcd-addons-526200" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.375328    2660 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-526200" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.394638    2660 pod_ready.go:93] pod "kube-apiserver-addons-526200" in "kube-system" namespace has status "Ready":"True"
	I0923 11:11:50.394638    2660 pod_ready.go:82] duration metric: took 19.309ms for pod "kube-apiserver-addons-526200" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.394638    2660 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-526200" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.415375    2660 pod_ready.go:93] pod "kube-controller-manager-addons-526200" in "kube-system" namespace has status "Ready":"True"
	I0923 11:11:50.415919    2660 pod_ready.go:82] duration metric: took 21.2796ms for pod "kube-controller-manager-addons-526200" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.415980    2660 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-t856x" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.438242    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:50.438242    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:50.439168    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:50.504188    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:11:50.558386    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:50.558434    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:50.558664    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:50.621716    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:50.621716    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:50.621716    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:50.635717    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:11:50.669009    2660 pod_ready.go:93] pod "kube-proxy-t856x" in "kube-system" namespace has status "Ready":"True"
	I0923 11:11:50.669009    2660 pod_ready.go:82] duration metric: took 253.0115ms for pod "kube-proxy-t856x" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.669009    2660 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-526200" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:50.703353    2660 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:11:50.703353    2660 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 11:11:50.743276    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:50.743276    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:50.743831    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:50.815011    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:50.815011    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:50.815011    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:11:50.848102    2660 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:11:50.848102    2660 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 11:11:50.864858    2660 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:11:50.864858    2660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 11:11:50.991692    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 11:11:51.077246    2660 pod_ready.go:93] pod "kube-scheduler-addons-526200" in "kube-system" namespace has status "Ready":"True"
	I0923 11:11:51.077246    2660 pod_ready.go:82] duration metric: took 408.2097ms for pod "kube-scheduler-addons-526200" in "kube-system" namespace to be "Ready" ...
	I0923 11:11:51.077246    2660 pod_ready.go:39] duration metric: took 16.0501277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:11:51.077972    2660 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:11:51.093289    2660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:11:51.151059    2660 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:11:51.151059    2660 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 11:11:51.197531    2660 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:11:51.197627    2660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 11:11:51.204131    2660 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:11:51.204259    2660 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 11:11:51.343707    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:51.343707    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:51.343707    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:51.384474    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:11:51.389406    2660 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:11:51.389406    2660 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 11:11:51.430167    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:51.430350    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:51.430421    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:51.441775    2660 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:11:51.441849    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 11:11:51.482996    2660 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:11:51.483073    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 11:11:51.483073    2660 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:11:51.483073    2660 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 11:11:51.496835    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:51.496835    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:51.497546    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:51.562096    2660 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 11:11:51.562096    2660 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 11:11:51.652556    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:11:51.683704    2660 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:11:51.683704    2660 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 11:11:51.698585    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:11:51.771785    2660 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:11:51.771785    2660 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 11:11:51.840104    2660 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:11:51.840104    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 11:11:51.969337    2660 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:11:51.969337    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 11:11:52.032148    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:52.032333    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:52.032704    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:52.049870    2660 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:11:52.049870    2660 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 11:11:52.074972    2660 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:11:52.074972    2660 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:11:52.080987    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:11:52.169554    2660 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:11:52.170539    2660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 11:11:52.220189    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:11:52.297336    2660 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:11:52.297336    2660 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 11:11:52.306161    2660 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:11:52.306236    2660 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:11:52.400761    2660 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:11:52.400761    2660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 11:11:52.472856    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:52.472856    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:52.473046    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:52.561984    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:11:52.572888    2660 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:11:52.572888    2660 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 11:11:52.658217    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:52.658217    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:52.658217    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:52.680271    2660 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:11:52.680557    2660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 11:11:52.703558    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.199161s)
	I0923 11:11:52.766640    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:11:52.888687    2660 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:11:52.888687    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 11:11:52.959545    2660 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:11:52.959611    2660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 11:11:53.168768    2660 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:11:53.168826    2660 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 11:11:53.421387    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:11:53.426433    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:11:53.536653    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:11:53.576086    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:11:53.576884    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:53.576884    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:11:53.703676    2660 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:11:53.703754    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 11:11:54.233741    2660 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:11:54.233741    2660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 11:11:54.825357    2660 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:11:54.825357    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 11:11:54.935139    2660 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 11:11:55.652334    2660 addons.go:234] Setting addon gcp-auth=true in "addons-526200"
	I0923 11:11:55.652334    2660 host.go:66] Checking if "addons-526200" exists ...
	I0923 11:11:55.654247    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:55.666708    2660 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:11:55.666708    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 11:11:56.305831    2660 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:11:56.305831    2660 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 11:11:56.983179    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:11:57.673125    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:57.673379    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:57.684373    2660 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 11:11:57.684373    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-526200 ).state
	I0923 11:11:59.675731    2660 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:11:59.675731    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:11:59.675731    2660 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-526200 ).networkadapters[0]).ipaddresses[0]
	I0923 11:12:02.201975    2660 main.go:141] libmachine: [stdout =====>] : 172.19.158.244
	
	I0923 11:12:02.202260    2660 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:12:02.202322    2660 sshutil.go:53] new ssh client: &{IP:172.19.158.244 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-526200\id_rsa Username:docker}
	I0923 11:12:04.236611    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (13.5999745s)
	I0923 11:12:04.236795    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (13.2442081s)
	I0923 11:12:04.236795    2660 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (13.1426173s)
	I0923 11:12:04.236795    2660 api_server.go:72] duration metric: took 31.6963385s to wait for apiserver process to appear ...
	I0923 11:12:04.236795    2660 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:12:04.236795    2660 api_server.go:253] Checking apiserver healthz at https://172.19.158.244:8443/healthz ...
	I0923 11:12:04.236795    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.851452s)
	I0923 11:12:04.236795    2660 addons.go:475] Verifying addon ingress=true in "addons-526200"
	I0923 11:12:04.236795    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.5823778s)
	I0923 11:12:04.236795    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (12.5373627s)
	I0923 11:12:04.236795    2660 addons.go:475] Verifying addon registry=true in "addons-526200"
	I0923 11:12:04.237562    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.15569s)
	I0923 11:12:04.237717    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (12.0167155s)
	W0923 11:12:04.240176    2660 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:12:04.238340    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.6755153s)
	I0923 11:12:04.240176    2660 retry.go:31] will retry after 246.653256ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
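	[editor's note] The failure above is the classic CRD-establishment race: the VolumeSnapshotClass object is applied in the same `kubectl apply` invocation that creates the `snapshot.storage.k8s.io` CRDs, so the API server has no mapping for the kind yet; minikube's retry (246ms later) succeeds once the CRDs are established. The workaround the retry effectively achieves can be sketched as a two-phase apply — these are hypothetical standalone commands for illustration, not minikube's actual code path:

	```shell
	# Phase 1: install the snapshot CRDs on their own.
	kubectl apply \
	  -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f snapshot.storage.k8s.io_volumesnapshots.yaml

	# Phase 2: wait for the CRD to be Established before creating
	# any VolumeSnapshotClass objects that depend on it.
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s

	# Now the custom resource has a valid REST mapping.
	kubectl apply -f csi-hostpath-snapshotclass.yaml
	```

	A single `kubectl apply -f dir/` cannot guarantee this ordering, which is why the log shows the apply failing once and then being retried.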
	I0923 11:12:04.238375    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.4709597s)
	I0923 11:12:04.240176    2660 addons.go:475] Verifying addon metrics-server=true in "addons-526200"
	I0923 11:12:04.238375    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.8162573s)
	I0923 11:12:04.238375    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.8112111s)
	I0923 11:12:04.238375    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.7009986s)
	I0923 11:12:04.239676    2660 out.go:177] * Verifying ingress addon...
	I0923 11:12:04.243834    2660 out.go:177] * Verifying registry addon...
	I0923 11:12:04.249369    2660 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-526200 service yakd-dashboard -n yakd-dashboard
	
	I0923 11:12:04.253566    2660 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 11:12:04.255453    2660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 11:12:04.269204    2660 api_server.go:279] https://172.19.158.244:8443/healthz returned 200:
	ok
	I0923 11:12:04.273796    2660 api_server.go:141] control plane version: v1.31.1
	I0923 11:12:04.273874    2660 api_server.go:131] duration metric: took 37.076ms to wait for apiserver health ...
	I0923 11:12:04.273917    2660 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:12:04.287590    2660 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 11:12:04.287590    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:04.288179    2660 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 11:12:04.288179    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:04.298120    2660 system_pods.go:59] 14 kube-system pods found
	I0923 11:12:04.298192    2660 system_pods.go:61] "coredns-7c65d6cfc9-skqnk" [01191c5a-fcfc-4ccd-a575-718162a3dffa] Running
	I0923 11:12:04.298192    2660 system_pods.go:61] "etcd-addons-526200" [79bb5b5f-83b5-4a11-bd40-56b89c27294e] Running
	I0923 11:12:04.298192    2660 system_pods.go:61] "kube-apiserver-addons-526200" [09169d7c-6985-4c00-adbd-780c62af003e] Running
	I0923 11:12:04.298192    2660 system_pods.go:61] "kube-controller-manager-addons-526200" [dbceaed1-353f-4985-ab13-5ba4f10e41f1] Running
	I0923 11:12:04.298264    2660 system_pods.go:61] "kube-ingress-dns-minikube" [fc7573e8-45f2-4365-8309-41219ce25bfe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 11:12:04.298264    2660 system_pods.go:61] "kube-proxy-t856x" [3e8fb300-6bd4-4191-9db3-721b1d2d1252] Running
	I0923 11:12:04.298264    2660 system_pods.go:61] "kube-scheduler-addons-526200" [ae457c5f-0e35-4dce-8014-69f9e250437b] Running
	I0923 11:12:04.298305    2660 system_pods.go:61] "metrics-server-84c5f94fbc-xhdnw" [c77a3e02-02fe-46db-9ee9-7ccb8e036301] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:12:04.298305    2660 system_pods.go:61] "nvidia-device-plugin-daemonset-dkcn2" [750dbf44-39a9-49fa-b3fd-2d026fcd91aa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 11:12:04.298305    2660 system_pods.go:61] "registry-66c9cd494c-2x52r" [56baf2a1-7092-48e6-bb7f-3ff56671ab95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:12:04.298341    2660 system_pods.go:61] "registry-proxy-lxdlz" [8fba184d-ef78-4494-8f07-f1c7b3232682] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:12:04.298341    2660 system_pods.go:61] "snapshot-controller-56fcc65765-qpgw7" [fbaee4ef-3630-4772-9dcd-f817badc7029] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:12:04.298341    2660 system_pods.go:61] "snapshot-controller-56fcc65765-wl9v6" [df63fac8-b18b-4446-91db-2a8ea49c6b4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:12:04.298341    2660 system_pods.go:61] "storage-provisioner" [d611da08-9001-4435-9814-1ca96ceadbe2] Running
	I0923 11:12:04.298341    2660 system_pods.go:74] duration metric: took 24.4218ms to wait for pod list to return data ...
	I0923 11:12:04.298417    2660 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:12:04.331315    2660 default_sa.go:45] found service account: "default"
	I0923 11:12:04.331373    2660 default_sa.go:55] duration metric: took 32.9538ms for default service account to be created ...
	I0923 11:12:04.331373    2660 system_pods.go:116] waiting for k8s-apps to be running ...
	W0923 11:12:04.331788    2660 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0923 11:12:04.382969    2660 system_pods.go:86] 14 kube-system pods found
	I0923 11:12:04.382969    2660 system_pods.go:89] "coredns-7c65d6cfc9-skqnk" [01191c5a-fcfc-4ccd-a575-718162a3dffa] Running
	I0923 11:12:04.383026    2660 system_pods.go:89] "etcd-addons-526200" [79bb5b5f-83b5-4a11-bd40-56b89c27294e] Running
	I0923 11:12:04.383026    2660 system_pods.go:89] "kube-apiserver-addons-526200" [09169d7c-6985-4c00-adbd-780c62af003e] Running
	I0923 11:12:04.383026    2660 system_pods.go:89] "kube-controller-manager-addons-526200" [dbceaed1-353f-4985-ab13-5ba4f10e41f1] Running
	I0923 11:12:04.383058    2660 system_pods.go:89] "kube-ingress-dns-minikube" [fc7573e8-45f2-4365-8309-41219ce25bfe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 11:12:04.383058    2660 system_pods.go:89] "kube-proxy-t856x" [3e8fb300-6bd4-4191-9db3-721b1d2d1252] Running
	I0923 11:12:04.383058    2660 system_pods.go:89] "kube-scheduler-addons-526200" [ae457c5f-0e35-4dce-8014-69f9e250437b] Running
	I0923 11:12:04.383058    2660 system_pods.go:89] "metrics-server-84c5f94fbc-xhdnw" [c77a3e02-02fe-46db-9ee9-7ccb8e036301] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:12:04.383058    2660 system_pods.go:89] "nvidia-device-plugin-daemonset-dkcn2" [750dbf44-39a9-49fa-b3fd-2d026fcd91aa] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 11:12:04.383058    2660 system_pods.go:89] "registry-66c9cd494c-2x52r" [56baf2a1-7092-48e6-bb7f-3ff56671ab95] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:12:04.383115    2660 system_pods.go:89] "registry-proxy-lxdlz" [8fba184d-ef78-4494-8f07-f1c7b3232682] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:12:04.383115    2660 system_pods.go:89] "snapshot-controller-56fcc65765-qpgw7" [fbaee4ef-3630-4772-9dcd-f817badc7029] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:12:04.383115    2660 system_pods.go:89] "snapshot-controller-56fcc65765-wl9v6" [df63fac8-b18b-4446-91db-2a8ea49c6b4a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:12:04.383115    2660 system_pods.go:89] "storage-provisioner" [d611da08-9001-4435-9814-1ca96ceadbe2] Running
	I0923 11:12:04.383147    2660 system_pods.go:126] duration metric: took 51.7704ms to wait for k8s-apps to be running ...
	I0923 11:12:04.383147    2660 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:12:04.393353    2660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:12:04.498187    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:12:04.766888    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:04.767124    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:05.301292    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:05.303312    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:05.397042    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.413294s)
	I0923 11:12:05.397042    2660 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.7121475s)
	I0923 11:12:05.397042    2660 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-526200"
	I0923 11:12:05.397042    2660 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.0036217s)
	I0923 11:12:05.397042    2660 system_svc.go:56] duration metric: took 1.0138267s WaitForService to wait for kubelet
	I0923 11:12:05.397042    2660 kubeadm.go:582] duration metric: took 32.856507s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:12:05.397042    2660 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:12:05.402511    2660 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:12:05.403051    2660 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 11:12:05.407518    2660 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 11:12:05.409484    2660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 11:12:05.410045    2660 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:12:05.410045    2660 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 11:12:05.447000    2660 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 11:12:05.447000    2660 node_conditions.go:123] node cpu capacity is 2
	I0923 11:12:05.447000    2660 node_conditions.go:105] duration metric: took 49.9544ms to run NodePressure ...
	I0923 11:12:05.447000    2660 start.go:241] waiting for startup goroutines ...
	I0923 11:12:05.447922    2660 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 11:12:05.447922    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:05.581801    2660 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:12:05.581840    2660 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 11:12:05.701502    2660 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:12:05.701502    2660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 11:12:05.746545    2660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:12:05.791113    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:05.791718    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:05.915141    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:06.265753    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:06.265753    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:06.415719    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:06.553334    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.0548883s)
	I0923 11:12:06.761031    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:06.762908    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:06.931867    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:07.151099    2660 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.4044588s)
	I0923 11:12:07.160075    2660 addons.go:475] Verifying addon gcp-auth=true in "addons-526200"
	I0923 11:12:07.162701    2660 out.go:177] * Verifying gcp-auth addon...
	I0923 11:12:07.166841    2660 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 11:12:07.256017    2660 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:12:07.355570    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:07.358248    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:07.457557    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:07.761164    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:07.763535    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:07.916406    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:08.282168    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:08.283202    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:08.416591    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:08.759343    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:08.760312    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:08.917773    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:09.261296    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:09.261779    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:09.417197    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:09.777635    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:09.777635    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:09.916957    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:10.258480    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:10.259453    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:10.417220    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:10.762169    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:10.762304    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:10.917650    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:11.261106    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:11.261528    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:11.418204    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:11.760990    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:11.764093    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:11.918759    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:12.261607    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:12.263964    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:12.417291    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:12.761200    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:12.762921    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:12.981387    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:13.261438    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:13.266035    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:13.416356    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:13.762340    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:13.764382    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:13.916159    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:14.259702    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:14.261420    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:14.416935    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:14.760474    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:14.761486    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:14.928906    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:15.260283    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:15.261791    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:15.416177    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:15.762589    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:15.762841    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:15.917013    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:16.260592    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:16.261814    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:16.417689    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:16.759495    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:16.762185    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:16.918143    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:17.259542    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:17.260522    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:17.418028    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:18.145198    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:18.145879    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:18.146369    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:18.548177    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:18.548811    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:18.549532    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:18.761400    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:18.761578    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:18.920251    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:19.263373    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:19.264262    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:19.421205    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:19.761875    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:19.761875    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:19.918111    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:20.261228    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:20.277231    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:20.416905    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:20.760320    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:20.762303    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:20.918209    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:21.261239    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:21.261239    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:21.417980    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:21.761691    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:21.762703    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:21.916750    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:22.260782    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:22.264335    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:22.418225    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:22.762097    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:22.763418    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:22.916292    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:23.262511    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:23.266313    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:23.418336    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:23.760164    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:23.763166    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:23.917150    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:24.263941    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:24.264392    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:24.417313    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:24.765850    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:24.768712    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:24.919331    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:26.735332    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:26.735332    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:26.735332    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:26.794778    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:26.796274    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:26.796274    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:26.876624    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:26.877256    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:26.920662    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:27.261684    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:27.263854    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:27.417176    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:27.776346    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:27.776346    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:27.969460    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:28.262101    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:28.263615    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:28.419443    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:28.762272    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:28.762458    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:28.917413    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:29.262507    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:29.263509    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:29.417516    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:29.761595    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:29.765171    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:29.920103    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:30.261566    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:30.263348    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:30.417573    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:30.762664    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:30.767678    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:30.919539    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:31.267327    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:31.267617    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:31.419287    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:31.763452    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:31.764642    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:31.919537    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:32.262178    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:32.266869    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:32.418825    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:32.762972    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:32.763517    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:32.919480    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:33.262040    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:33.263871    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:33.418318    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:33.761118    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:33.765198    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:33.919410    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:34.264212    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:34.267013    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:34.419051    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:34.763952    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:34.764462    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:34.918438    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:35.260724    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:35.264076    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:35.418272    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:35.762827    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:35.765638    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:35.918751    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:36.262891    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:36.265642    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:36.418992    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:36.763428    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:36.763428    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:36.920308    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:37.562635    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:37.563444    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:37.564275    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:37.897605    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:37.898211    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:37.918949    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:38.261291    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:38.264423    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:38.435991    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:38.779295    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:38.780580    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:38.918240    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:39.261371    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:39.264186    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:39.432022    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:39.761455    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:39.762458    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:39.918064    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:40.261921    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:40.273255    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:40.418238    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:40.763551    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:40.763691    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:40.919025    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:41.265175    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:41.266314    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:41.418408    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:41.763193    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:41.764203    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:41.939611    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:42.262039    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:42.275872    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:42.420095    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:42.763860    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:42.763860    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:42.920019    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:43.262441    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:43.262441    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:43.419740    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:43.762504    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:43.764389    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:43.919966    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:44.264282    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:44.266338    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:44.418632    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:44.764765    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:44.764836    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:44.918455    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:45.265565    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:45.266587    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:45.420799    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:45.761795    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:45.764897    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:45.920279    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:46.263085    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:46.265171    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:46.419355    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:46.765308    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:46.768327    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:46.919453    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:47.262470    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:47.263441    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:47.419696    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:47.763485    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:47.764749    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:47.918603    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:48.265545    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:48.266000    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:48.419848    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:48.761654    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:48.762654    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:48.919463    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:49.263854    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:49.267223    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:49.416900    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:49.763005    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:49.763005    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:49.921357    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:50.882985    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:50.889299    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:50.893226    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:50.899441    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:50.900847    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:50.918646    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:51.260976    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:51.262972    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:51.418095    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:51.762455    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:51.764486    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:51.918139    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:52.264069    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:52.265844    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:52.420803    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:52.763165    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:52.764138    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:53.346050    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:53.346167    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:53.346601    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:53.420575    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:53.763360    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:53.765361    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:53.920809    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:54.263819    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:54.265027    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:54.418726    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:54.776490    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:54.777154    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:54.920591    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:55.261740    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:55.263709    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:55.419362    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:55.762997    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:55.764953    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:55.920133    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:56.266718    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:56.266718    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:56.419229    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:56.884408    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:56.885073    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:56.991729    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:57.262433    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:57.265928    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:57.421115    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:57.761694    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:57.763687    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:12:57.919785    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:58.264628    2660 kapi.go:107] duration metric: took 54.0055263s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 11:12:58.268253    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:58.418507    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:58.764732    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:58.919156    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:59.286248    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:59.421182    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:12:59.763769    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:12:59.920317    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:00.265642    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:00.420804    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:00.764419    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:00.921796    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:01.266701    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:01.421853    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:01.762898    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:01.920318    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:02.264128    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:02.419385    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:02.763702    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:02.920068    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:03.264274    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:03.700107    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:03.844248    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:03.920004    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:04.634475    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:04.635533    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:04.764117    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:04.920012    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:05.297356    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:05.419338    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:05.842269    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:05.951772    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:06.263690    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:06.423107    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:06.763011    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:06.919533    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:07.262386    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:07.419858    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:07.763861    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:07.921623    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:08.264198    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:08.420543    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:09.050841    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:09.051057    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:09.264577    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:09.426209    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:09.764465    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:09.920158    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:10.275445    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:10.442743    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:10.764954    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:10.921426    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:11.265716    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:11.421290    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:11.889188    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:12.189220    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:12.342816    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:12.439583    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:12.812995    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:12.920275    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:13.264316    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:13.421375    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:13.763586    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:13.920386    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:14.350701    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:14.421185    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:13:14.764382    2660 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:13:14.922160    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... identical "waiting for pod" polling lines for both selectors repeated at ~500ms intervals through 11:14:09; state remained Pending: [<nil>] ...]
	I0923 11:14:09.426490    2660 kapi.go:107] duration metric: took 2m4.0086254s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	[... identical "waiting for pod" polling lines for app.kubernetes.io/name=ingress-nginx repeated at ~500ms intervals through 11:14:30; state remained Pending: [<nil>] ...]
	I0923 11:14:31.269875    2660 kapi.go:107] duration metric: took 2m27.0063739s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 11:14:51.193493    2660 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:14:51.193493    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:51.684663    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:52.185244    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:52.683791    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:53.184301    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:53.684138    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:54.186150    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:54.684634    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:55.184272    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:55.685017    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:56.184392    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:56.685057    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:57.183607    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:57.683728    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:58.183572    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:58.687170    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:59.185107    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:14:59.684159    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:00.184725    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:00.684326    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:01.184214    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:01.684645    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:02.184798    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:02.684482    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:03.183842    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:03.684451    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:04.184638    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:04.684449    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:05.185139    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:05.684622    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:06.184707    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:06.685577    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:07.185009    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:07.684436    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:08.183882    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:08.686047    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:09.184696    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:09.685445    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:10.185663    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:10.684576    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:11.183994    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:11.690598    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:12.184155    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:12.685377    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:13.185097    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:13.685307    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:14.185403    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:14.685532    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:15.188446    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:15.686080    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:16.184894    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:16.686321    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:17.185609    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:17.685991    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:18.185417    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:18.687060    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:19.185912    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:19.687261    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:20.186521    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:20.686038    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:21.186822    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:21.691661    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:22.194046    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:22.686526    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:23.186901    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:23.684559    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:24.185666    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:24.685926    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:25.185908    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:25.686539    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:26.184920    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:26.686771    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:27.187034    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:27.685295    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:28.210097    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:28.686157    2660 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:15:29.187358    2660 kapi.go:107] duration metric: took 3m22.0068647s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 11:15:29.190162    2660 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-526200 cluster.
	I0923 11:15:29.193108    2660 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 11:15:29.195541    2660 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 11:15:29.197801    2660 out.go:177] * Enabled addons: ingress-dns, volcano, cloud-spanner, storage-provisioner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 11:15:29.202305    2660 addons.go:510] duration metric: took 3m56.6479974s for enable addons: enabled=[ingress-dns volcano cloud-spanner storage-provisioner nvidia-device-plugin inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0923 11:15:29.202305    2660 start.go:246] waiting for cluster config update ...
	I0923 11:15:29.202305    2660 start.go:255] writing updated cluster config ...
	I0923 11:15:29.212611    2660 ssh_runner.go:195] Run: rm -f paused
	I0923 11:15:29.431768    2660 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:15:29.437331    2660 out.go:177] * Done! kubectl is now configured to use "addons-526200" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 11:26:43 addons-526200 dockerd[1428]: time="2024-09-23T11:26:43.208410182Z" level=info msg="shim disconnected" id=76f13ea6afcddb1631079c1ce92f25c7b22d8400da8986cac93eb65d3eb0d9ce namespace=moby
	Sep 23 11:26:43 addons-526200 dockerd[1428]: time="2024-09-23T11:26:43.208926618Z" level=warning msg="cleaning up after shim disconnected" id=76f13ea6afcddb1631079c1ce92f25c7b22d8400da8986cac93eb65d3eb0d9ce namespace=moby
	Sep 23 11:26:43 addons-526200 dockerd[1428]: time="2024-09-23T11:26:43.209125432Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:26:44 addons-526200 dockerd[1421]: time="2024-09-23T11:26:44.356681000Z" level=info msg="ignoring event" container=a25f88dcb3fd3958afa37bd1bdb5fc7e7fdfdba16880397fa60f0288aea1506c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:44 addons-526200 dockerd[1428]: time="2024-09-23T11:26:44.357541560Z" level=info msg="shim disconnected" id=a25f88dcb3fd3958afa37bd1bdb5fc7e7fdfdba16880397fa60f0288aea1506c namespace=moby
	Sep 23 11:26:44 addons-526200 dockerd[1428]: time="2024-09-23T11:26:44.358254710Z" level=warning msg="cleaning up after shim disconnected" id=a25f88dcb3fd3958afa37bd1bdb5fc7e7fdfdba16880397fa60f0288aea1506c namespace=moby
	Sep 23 11:26:44 addons-526200 dockerd[1428]: time="2024-09-23T11:26:44.358333515Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:26:44 addons-526200 dockerd[1421]: time="2024-09-23T11:26:44.542140823Z" level=info msg="ignoring event" container=1180129d66ec4752816774c0b5d670b31f0472d09dc6ef3778539a6cadca3e88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:44 addons-526200 dockerd[1428]: time="2024-09-23T11:26:44.543344807Z" level=info msg="shim disconnected" id=1180129d66ec4752816774c0b5d670b31f0472d09dc6ef3778539a6cadca3e88 namespace=moby
	Sep 23 11:26:44 addons-526200 dockerd[1428]: time="2024-09-23T11:26:44.543430013Z" level=warning msg="cleaning up after shim disconnected" id=1180129d66ec4752816774c0b5d670b31f0472d09dc6ef3778539a6cadca3e88 namespace=moby
	Sep 23 11:26:44 addons-526200 dockerd[1428]: time="2024-09-23T11:26:44.543448714Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:26:51 addons-526200 dockerd[1421]: time="2024-09-23T11:26:51.983445250Z" level=info msg="ignoring event" container=363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:51 addons-526200 dockerd[1428]: time="2024-09-23T11:26:51.984862248Z" level=info msg="shim disconnected" id=363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52 namespace=moby
	Sep 23 11:26:51 addons-526200 dockerd[1428]: time="2024-09-23T11:26:51.984984457Z" level=warning msg="cleaning up after shim disconnected" id=363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52 namespace=moby
	Sep 23 11:26:51 addons-526200 dockerd[1428]: time="2024-09-23T11:26:51.985009858Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:26:52 addons-526200 dockerd[1421]: time="2024-09-23T11:26:52.185267838Z" level=info msg="ignoring event" container=c5e9d5bd9e83f1433cab12da4bd8bcad41335d8fb6f342d05fcc2f3e12de7ecc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:26:52 addons-526200 dockerd[1428]: time="2024-09-23T11:26:52.186016590Z" level=info msg="shim disconnected" id=c5e9d5bd9e83f1433cab12da4bd8bcad41335d8fb6f342d05fcc2f3e12de7ecc namespace=moby
	Sep 23 11:26:52 addons-526200 dockerd[1428]: time="2024-09-23T11:26:52.186174601Z" level=warning msg="cleaning up after shim disconnected" id=c5e9d5bd9e83f1433cab12da4bd8bcad41335d8fb6f342d05fcc2f3e12de7ecc namespace=moby
	Sep 23 11:26:52 addons-526200 dockerd[1428]: time="2024-09-23T11:26:52.186248506Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:26:58 addons-526200 dockerd[1428]: time="2024-09-23T11:26:58.339346113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:26:58 addons-526200 dockerd[1428]: time="2024-09-23T11:26:58.339890251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:26:58 addons-526200 dockerd[1428]: time="2024-09-23T11:26:58.339919253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:26:58 addons-526200 dockerd[1428]: time="2024-09-23T11:26:58.340243476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:26:58 addons-526200 cri-dockerd[1322]: time="2024-09-23T11:26:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/69162cbf726316f74db906032e26ef155dcdda4d14b628ba3310f02b70e5b3be/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 11:26:58 addons-526200 dockerd[1421]: time="2024-09-23T11:26:58.770033534Z" level=warning msg="reference for unknown type: " digest="sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c" remote="ghcr.io/headlamp-k8s/headlamp@sha256:8825bb13459c64dcf9503d836b94b49c97dc831aff7c325a6eed68961388cf9c" spanID=1ef45a154aeb4c96 traceID=ce239f7061d8fcd25e60f39a0d167b28
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                          CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6c0ac1deb8e13       a416a98b71e22                                                                                                  24 seconds ago       Exited              helper-pod                0                   6496d8ab2b1cd       helper-pod-delete-pvc-ec44c691-0529-4f83-b313-a77082d0c7d8
	52dbb2b9d1233       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                38 seconds ago       Exited              busybox                   0                   83733d9e974ac       test-local-path
	85d1321b7eaad       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                    About a minute ago   Running             hello-world-app           0                   aa88c609e3f6a       hello-world-app-55bf9c44b4-5mcvf
	9612197b3ebba       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                  About a minute ago   Running             nginx                     0                   472d02ffffc8d       nginx
	6aa1cc01f5195       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb   11 minutes ago       Running             gcp-auth                  0                   327da568ecece       gcp-auth-89d5ffd79-h7x7h
	e97a7840c560b       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246         13 minutes ago       Running             local-path-provisioner    0                   7cbdf8a2cb5ee       local-path-provisioner-86d989889c-b4gtq
	a0fb96c9fbfc3       6e38f40d628db                                                                                                  15 minutes ago       Running             storage-provisioner       0                   5cad600757649       storage-provisioner
	2f8bc69b9ceaf       c69fa2e9cbf5f                                                                                                  15 minutes ago       Running             coredns                   0                   70a890af24656       coredns-7c65d6cfc9-skqnk
	dc5def149db97       60c005f310ff3                                                                                                  15 minutes ago       Running             kube-proxy                0                   c0003d9dbc609       kube-proxy-t856x
	b9c9f3544f730       6bab7719df100                                                                                                  15 minutes ago       Running             kube-apiserver            0                   cadc75a2346c2       kube-apiserver-addons-526200
	a414247f1ff40       175ffd71cce3d                                                                                                  15 minutes ago       Running             kube-controller-manager   0                   fbfba76e01440       kube-controller-manager-addons-526200
	1bff83b8ae1e4       2e96e5913fc06                                                                                                  15 minutes ago       Running             etcd                      0                   cc99d009b33e0       etcd-addons-526200
	61af6cdd81ade       9aa1fad941575                                                                                                  15 minutes ago       Running             kube-scheduler            0                   71bf465004ce0       kube-scheduler-addons-526200
	
	
	==> coredns [2f8bc69b9cea] <==
	[INFO] 10.244.0.21:53445 - 8461 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000103908s
	[INFO] 10.244.0.21:53445 - 5256 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000104807s
	[INFO] 10.244.0.21:53445 - 33128 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000086406s
	[INFO] 10.244.0.21:53445 - 18746 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000240417s
	[INFO] 10.244.0.21:59323 - 50489 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101907s
	[INFO] 10.244.0.21:59323 - 7510 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062404s
	[INFO] 10.244.0.21:59323 - 22800 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000255418s
	[INFO] 10.244.0.21:59323 - 3317 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064905s
	[INFO] 10.244.0.21:59323 - 55453 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070505s
	[INFO] 10.244.0.21:59323 - 14462 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000181313s
	[INFO] 10.244.0.21:59323 - 49529 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066305s
	[INFO] 10.244.0.21:43131 - 12649 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000110807s
	[INFO] 10.244.0.21:44916 - 39630 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000077005s
	[INFO] 10.244.0.21:44916 - 53071 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00028622s
	[INFO] 10.244.0.21:43131 - 3831 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.001943135s
	[INFO] 10.244.0.21:44916 - 63465 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000186913s
	[INFO] 10.244.0.21:44916 - 1520 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000080006s
	[INFO] 10.244.0.21:43131 - 52576 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000372426s
	[INFO] 10.244.0.21:44916 - 29700 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059504s
	[INFO] 10.244.0.21:43131 - 53178 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000132409s
	[INFO] 10.244.0.21:44916 - 23292 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000095306s
	[INFO] 10.244.0.21:43131 - 64846 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077205s
	[INFO] 10.244.0.21:44916 - 12671 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096307s
	[INFO] 10.244.0.21:43131 - 55400 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064704s
	[INFO] 10.244.0.21:43131 - 54202 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000217415s
	
	
	==> describe nodes <==
	Name:               addons-526200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-526200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-526200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_11_27_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-526200
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:11:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-526200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:26:35 +0000   Mon, 23 Sep 2024 11:11:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:26:35 +0000   Mon, 23 Sep 2024 11:11:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:26:35 +0000   Mon, 23 Sep 2024 11:11:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:26:35 +0000   Mon, 23 Sep 2024 11:11:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.158.244
	  Hostname:    addons-526200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 9fb3118ff1cd4adf945c6ce110dc480c
	  System UUID:                d935663b-9dd8-854c-bc4d-47527f33a382
	  Boot ID:                    78bdb348-ce7f-4edd-8e00-7cdaa221461b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-55bf9c44b4-5mcvf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  gcp-auth                    gcp-auth-89d5ffd79-h7x7h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  headlamp                    headlamp-7b5c95b59d-pnbjk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 coredns-7c65d6cfc9-skqnk                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-526200                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-526200               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-526200      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-t856x                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-526200               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          local-path-provisioner-86d989889c-b4gtq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-526200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-526200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-526200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-526200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-526200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-526200 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-526200 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-526200 event: Registered Node addons-526200 in Controller
	
	
	==> dmesg <==
	[  +7.801158] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.099836] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.625569] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 11:15] kauditd_printk_skb: 40 callbacks suppressed
	[  +7.327000] kauditd_printk_skb: 40 callbacks suppressed
	[Sep23 11:16] kauditd_printk_skb: 2 callbacks suppressed
	[ +31.329317] kauditd_printk_skb: 20 callbacks suppressed
	[ +13.716029] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.830381] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 11:24] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 11:25] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.134010] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.061435] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.241825] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.663490] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.384423] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.152959] kauditd_printk_skb: 7 callbacks suppressed
	[Sep23 11:26] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.978070] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.297279] kauditd_printk_skb: 15 callbacks suppressed
	[  +8.907120] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.205846] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.000646] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.224594] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.863118] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1bff83b8ae1e] <==
	{"level":"warn","ts":"2024-09-23T11:16:01.719272Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T11:16:01.108577Z","time spent":"610.3024ms","remote":"127.0.0.1:35028","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1135,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-09-23T11:16:01.719748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.789386ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:16:01.719793Z","caller":"traceutil/trace.go:171","msg":"trace[1766176408] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1619; }","duration":"291.843389ms","start":"2024-09-23T11:16:01.427938Z","end":"2024-09-23T11:16:01.719782Z","steps":["trace[1766176408] 'range keys from in-memory index tree'  (duration: 291.374861ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:16:01.720583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.988709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:16:01.720665Z","caller":"traceutil/trace.go:171","msg":"trace[1730783689] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1619; }","duration":"220.076014ms","start":"2024-09-23T11:16:01.500581Z","end":"2024-09-23T11:16:01.720657Z","steps":["trace[1730783689] 'agreement among raft nodes before linearized reading'  (duration: 219.970108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:16:01.720518Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.405258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-23T11:16:01.722484Z","caller":"traceutil/trace.go:171","msg":"trace[237413184] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:1619; }","duration":"142.373478ms","start":"2024-09-23T11:16:01.580101Z","end":"2024-09-23T11:16:01.722474Z","steps":["trace[237413184] 'agreement among raft nodes before linearized reading'  (duration: 140.387957ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:16:02.069926Z","caller":"traceutil/trace.go:171","msg":"trace[1112141043] transaction","detail":"{read_only:false; response_revision:1620; number_of_response:1; }","duration":"343.136015ms","start":"2024-09-23T11:16:01.726776Z","end":"2024-09-23T11:16:02.069912Z","steps":["trace[1112141043] 'process raft request'  (duration: 342.805895ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:16:02.070164Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T11:16:01.726760Z","time spent":"343.22072ms","remote":"127.0.0.1:35028","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1617 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-23T11:21:22.225967Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1658}
	{"level":"warn","ts":"2024-09-23T11:21:22.538419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.689533ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6660421558960395997 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2240 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-23T11:21:22.538480Z","caller":"traceutil/trace.go:171","msg":"trace[1532482749] linearizableReadLoop","detail":"{readStateIndex:2395; appliedIndex:2394; }","duration":"105.535723ms","start":"2024-09-23T11:21:22.432934Z","end":"2024-09-23T11:21:22.538470Z","steps":["trace[1532482749] 'read index received'  (duration: 22.502µs)","trace[1532482749] 'applied index is now lower than readState.Index'  (duration: 105.512721ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T11:21:22.538536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.599827ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:21:22.538550Z","caller":"traceutil/trace.go:171","msg":"trace[2094229274] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2244; }","duration":"105.614728ms","start":"2024-09-23T11:21:22.432930Z","end":"2024-09-23T11:21:22.538545Z","steps":["trace[2094229274] 'agreement among raft nodes before linearized reading'  (duration: 105.568225ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:21:22.539355Z","caller":"traceutil/trace.go:171","msg":"trace[941105844] transaction","detail":"{read_only:false; response_revision:2244; number_of_response:1; }","duration":"112.065615ms","start":"2024-09-23T11:21:22.427274Z","end":"2024-09-23T11:21:22.539340Z","steps":["trace[941105844] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; req_size:1095; } (duration: 105.427216ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:21:22.563762Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1658,"took":"336.739475ms","hash":1009420519,"current-db-size-bytes":8945664,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5378048,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2024-09-23T11:21:22.563864Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1009420519,"revision":1658,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T11:25:21.481869Z","caller":"traceutil/trace.go:171","msg":"trace[2037441129] linearizableReadLoop","detail":"{readStateIndex:2798; appliedIndex:2797; }","duration":"188.798338ms","start":"2024-09-23T11:25:21.293049Z","end":"2024-09-23T11:25:21.481847Z","steps":["trace[2037441129] 'read index received'  (duration: 98.159231ms)","trace[2037441129] 'applied index is now lower than readState.Index'  (duration: 90.638307ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T11:25:21.482300Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.163063ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-23T11:25:21.482430Z","caller":"traceutil/trace.go:171","msg":"trace[53926569] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:2595; }","duration":"189.379078ms","start":"2024-09-23T11:25:21.293016Z","end":"2024-09-23T11:25:21.482395Z","steps":["trace[53926569] 'agreement among raft nodes before linearized reading'  (duration: 188.989551ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:25:21.484377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.807293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-23T11:25:21.484774Z","caller":"traceutil/trace.go:171","msg":"trace[1509001885] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:2595; }","duration":"105.608649ms","start":"2024-09-23T11:25:21.379080Z","end":"2024-09-23T11:25:21.484688Z","steps":["trace[1509001885] 'agreement among raft nodes before linearized reading'  (duration: 103.600409ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:26:22.243502Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2243}
	{"level":"info","ts":"2024-09-23T11:26:22.280863Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2243,"took":"36.688671ms","hash":1765493264,"current-db-size-bytes":8945664,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4440064,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-23T11:26:22.280921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1765493264,"revision":2243,"compact-revision":1658}
	
	
	==> gcp-auth [6aa1cc01f519] <==
	2024/09/23 11:16:33 Ready to write response ...
	2024/09/23 11:16:33 Ready to marshal response ...
	2024/09/23 11:16:33 Ready to write response ...
	2024/09/23 11:25:15 Ready to marshal response ...
	2024/09/23 11:25:15 Ready to write response ...
	2024/09/23 11:25:25 Ready to marshal response ...
	2024/09/23 11:25:25 Ready to write response ...
	2024/09/23 11:25:30 Ready to marshal response ...
	2024/09/23 11:25:30 Ready to write response ...
	2024/09/23 11:25:37 Ready to marshal response ...
	2024/09/23 11:25:37 Ready to write response ...
	2024/09/23 11:25:47 Ready to marshal response ...
	2024/09/23 11:25:47 Ready to write response ...
	2024/09/23 11:26:15 Ready to marshal response ...
	2024/09/23 11:26:15 Ready to write response ...
	2024/09/23 11:26:15 Ready to marshal response ...
	2024/09/23 11:26:15 Ready to write response ...
	2024/09/23 11:26:36 Ready to marshal response ...
	2024/09/23 11:26:36 Ready to write response ...
	2024/09/23 11:26:57 Ready to marshal response ...
	2024/09/23 11:26:57 Ready to write response ...
	2024/09/23 11:26:57 Ready to marshal response ...
	2024/09/23 11:26:57 Ready to write response ...
	2024/09/23 11:26:57 Ready to marshal response ...
	2024/09/23 11:26:57 Ready to write response ...
	
	
	==> kernel <==
	 11:27:01 up 17 min,  0 users,  load average: 3.28, 1.73, 1.11
	Linux addons-526200 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b9c9f3544f73] <==
	W0923 11:16:24.678792       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 11:16:24.737852       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 11:16:24.777941       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 11:16:25.260371       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 11:16:25.449780       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 11:25:15.112618       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 11:25:15.523127       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.8.169"}
	I0923 11:25:37.620406       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.143.225"}
	I0923 11:25:38.529044       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 11:25:58.708051       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 11:25:59.772788       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 11:26:32.029143       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.029271       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 11:26:32.133506       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.134026       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 11:26:32.203520       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.203851       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 11:26:32.276004       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.276182       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 11:26:32.353194       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 11:26:32.353249       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 11:26:33.276255       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 11:26:33.353193       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 11:26:33.400170       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0923 11:26:57.745773       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.211.146"}
	
	
	==> kube-controller-manager [a414247f1ff4] <==
	E0923 11:26:40.606289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:41.224627       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:41.224787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:42.527957       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:42.528096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 11:26:42.649171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.4µs"
	W0923 11:26:46.537590       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:46.538004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:49.779069       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:49.779143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:50.411534       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:50.411651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:50.911333       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:50.911382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 11:26:51.889371       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="6.1µs"
	W0923 11:26:52.984943       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:52.985571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:55.460850       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:55.460895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 11:26:55.658846       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 11:26:55.658903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 11:26:57.872285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="88.267354ms"
	I0923 11:26:57.885209       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="12.827595ms"
	I0923 11:26:57.886263       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="909.963µs"
	I0923 11:26:57.900528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="366.826µs"
	
	
	==> kube-proxy [dc5def149db9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:11:39.296301       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 11:11:39.556339       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.158.244"]
	E0923 11:11:39.556423       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:11:39.923469       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:11:39.923525       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:11:39.923565       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:11:39.963047       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:11:39.964183       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:11:39.964886       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:11:39.976580       1 config.go:199] "Starting service config controller"
	I0923 11:11:39.977227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:11:39.977480       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:11:39.977655       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:11:39.979681       1 config.go:328] "Starting node config controller"
	I0923 11:11:39.991072       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:11:40.078675       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:11:40.078794       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:11:40.091674       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [61af6cdd81ad] <==
	W0923 11:11:24.526590       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:11:24.526867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:24.559496       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 11:11:24.559537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:24.596988       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:11:24.597039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:24.620113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:11:24.620264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:24.710560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:11:24.710798       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:24.710595       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:11:24.711108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:24.715921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 11:11:24.716008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:24.831732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:11:24.832397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:24.851831       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:11:24.851857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:24.862185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:11:24.862253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:25.044573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:11:25.045137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:11:25.155040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 11:11:25.155077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 11:11:26.889417       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:26:44 addons-526200 kubelet[2199]: I0923 11:26:44.977183    2199 reconciler_common.go:288] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/750dbf44-39a9-49fa-b3fd-2d026fcd91aa-device-plugin\") on node \"addons-526200\" DevicePath \"\""
	Sep 23 11:26:44 addons-526200 kubelet[2199]: I0923 11:26:44.977401    2199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rcp96\" (UniqueName: \"kubernetes.io/projected/750dbf44-39a9-49fa-b3fd-2d026fcd91aa-kube-api-access-rcp96\") on node \"addons-526200\" DevicePath \"\""
	Sep 23 11:26:45 addons-526200 kubelet[2199]: I0923 11:26:45.768934    2199 scope.go:117] "RemoveContainer" containerID="a25f88dcb3fd3958afa37bd1bdb5fc7e7fdfdba16880397fa60f0288aea1506c"
	Sep 23 11:26:46 addons-526200 kubelet[2199]: I0923 11:26:46.782925    2199 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750dbf44-39a9-49fa-b3fd-2d026fcd91aa" path="/var/lib/kubelet/pods/750dbf44-39a9-49fa-b3fd-2d026fcd91aa/volumes"
	Sep 23 11:26:52 addons-526200 kubelet[2199]: I0923 11:26:52.443492    2199 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8587\" (UniqueName: \"kubernetes.io/projected/4ccb3779-45aa-44dd-a2ce-1d1348f8023e-kube-api-access-k8587\") pod \"4ccb3779-45aa-44dd-a2ce-1d1348f8023e\" (UID: \"4ccb3779-45aa-44dd-a2ce-1d1348f8023e\") "
	Sep 23 11:26:52 addons-526200 kubelet[2199]: I0923 11:26:52.446303    2199 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ccb3779-45aa-44dd-a2ce-1d1348f8023e-kube-api-access-k8587" (OuterVolumeSpecName: "kube-api-access-k8587") pod "4ccb3779-45aa-44dd-a2ce-1d1348f8023e" (UID: "4ccb3779-45aa-44dd-a2ce-1d1348f8023e"). InnerVolumeSpecName "kube-api-access-k8587". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 11:26:52 addons-526200 kubelet[2199]: I0923 11:26:52.545039    2199 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k8587\" (UniqueName: \"kubernetes.io/projected/4ccb3779-45aa-44dd-a2ce-1d1348f8023e-kube-api-access-k8587\") on node \"addons-526200\" DevicePath \"\""
	Sep 23 11:26:52 addons-526200 kubelet[2199]: I0923 11:26:52.891913    2199 scope.go:117] "RemoveContainer" containerID="363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52"
	Sep 23 11:26:52 addons-526200 kubelet[2199]: I0923 11:26:52.928154    2199 scope.go:117] "RemoveContainer" containerID="363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52"
	Sep 23 11:26:52 addons-526200 kubelet[2199]: E0923 11:26:52.929605    2199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52" containerID="363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52"
	Sep 23 11:26:52 addons-526200 kubelet[2199]: I0923 11:26:52.929765    2199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52"} err="failed to get container status \"363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52\": rpc error: code = Unknown desc = Error response from daemon: No such container: 363aa096b69b9c086e8cbea7c43d519eb783918adbfe6908900c526a0e95bc52"
	Sep 23 11:26:54 addons-526200 kubelet[2199]: I0923 11:26:54.782456    2199 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ccb3779-45aa-44dd-a2ce-1d1348f8023e" path="/var/lib/kubelet/pods/4ccb3779-45aa-44dd-a2ce-1d1348f8023e/volumes"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: E0923 11:26:57.852463    2199 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ccb3779-45aa-44dd-a2ce-1d1348f8023e" containerName="cloud-spanner-emulator"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: E0923 11:26:57.852622    2199 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8fba184d-ef78-4494-8f07-f1c7b3232682" containerName="registry-proxy"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: E0923 11:26:57.852635    2199 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56baf2a1-7092-48e6-bb7f-3ff56671ab95" containerName="registry"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: E0923 11:26:57.852644    2199 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="750dbf44-39a9-49fa-b3fd-2d026fcd91aa" containerName="nvidia-device-plugin-ctr"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: E0923 11:26:57.852801    2199 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bea2295-105e-4e40-a070-b917d212a2c6" containerName="helper-pod"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: I0923 11:26:57.853066    2199 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bea2295-105e-4e40-a070-b917d212a2c6" containerName="helper-pod"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: I0923 11:26:57.853244    2199 memory_manager.go:354] "RemoveStaleState removing state" podUID="56baf2a1-7092-48e6-bb7f-3ff56671ab95" containerName="registry"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: I0923 11:26:57.853332    2199 memory_manager.go:354] "RemoveStaleState removing state" podUID="750dbf44-39a9-49fa-b3fd-2d026fcd91aa" containerName="nvidia-device-plugin-ctr"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: I0923 11:26:57.853341    2199 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fba184d-ef78-4494-8f07-f1c7b3232682" containerName="registry-proxy"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: I0923 11:26:57.853348    2199 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ccb3779-45aa-44dd-a2ce-1d1348f8023e" containerName="cloud-spanner-emulator"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: I0923 11:26:57.994885    2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1a0fe8b8-cab1-4be9-8a17-b3cfa19d4949-gcp-creds\") pod \"headlamp-7b5c95b59d-pnbjk\" (UID: \"1a0fe8b8-cab1-4be9-8a17-b3cfa19d4949\") " pod="headlamp/headlamp-7b5c95b59d-pnbjk"
	Sep 23 11:26:57 addons-526200 kubelet[2199]: I0923 11:26:57.994933    2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jhz9\" (UniqueName: \"kubernetes.io/projected/1a0fe8b8-cab1-4be9-8a17-b3cfa19d4949-kube-api-access-8jhz9\") pod \"headlamp-7b5c95b59d-pnbjk\" (UID: \"1a0fe8b8-cab1-4be9-8a17-b3cfa19d4949\") " pod="headlamp/headlamp-7b5c95b59d-pnbjk"
	Sep 23 11:26:59 addons-526200 kubelet[2199]: E0923 11:26:59.794128    2199 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="a29025b7-1625-4a30-a02e-2812cfa81c39"
	
	
	==> storage-provisioner [a0fb96c9fbfc] <==
	I0923 11:12:00.432354       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:12:00.452477       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:12:00.452523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:12:00.492947       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:12:00.493112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-526200_167b2aae-3e27-43f7-b2fe-e4f12bf7446e!
	I0923 11:12:00.510974       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75d4430e-1220-4559-b606-773286892633", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-526200_167b2aae-3e27-43f7-b2fe-e4f12bf7446e became leader
	I0923 11:12:00.593627       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-526200_167b2aae-3e27-43f7-b2fe-e4f12bf7446e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-526200 -n addons-526200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-526200 -n addons-526200: (10.453153s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-526200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-526200 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-526200 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-526200/172.19.158.244
	Start Time:       Mon, 23 Sep 2024 11:16:33 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jpnzm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jpnzm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                  From               Message
	  ----     ------          ----                 ----               -------
	  Normal   Scheduled       10m                  default-scheduler  Successfully assigned default/busybox to addons-526200
	  Normal   SandboxChanged  10m                  kubelet            Pod sandbox changed, it will be killed and re-created.
	  Warning  Failed          9m22s (x6 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling         9m9s (x4 over 10m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed          9m9s (x4 over 10m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed          9m9s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff         29s (x45 over 10m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (119.10s)

                                                
                                    
TestErrorSpam/setup (173.06s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-191100 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 --driver=hyperv
E0923 11:30:29.550566    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:29.558560    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:29.571563    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:29.593571    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:29.635572    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:29.718575    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:29.881574    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:30.204616    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:30.847791    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:32.130227    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:34.692988    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:39.815887    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:30:50.059421    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:10.544063    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:31:51.509188    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-191100 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 --driver=hyperv: (2m53.0559817s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-191100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
- KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
- MINIKUBE_LOCATION=19690
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-191100" primary control-plane node in "nospam-191100" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-191100" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (173.06s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (28.91s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700: (10.368501s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs -n 25: (7.2979948s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-191100 --log_dir                                     | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:32 UTC | 23 Sep 24 11:32 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-191100 --log_dir                                     | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:32 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-191100 --log_dir                                     | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-191100 --log_dir                                     | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-191100 --log_dir                                     | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-191100 --log_dir                                     | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:34 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-191100 --log_dir                                     | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:34 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-191100                                            | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:34 UTC |
	| start   | -p functional-877700                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:38 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-877700                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:38 UTC | 23 Sep 24 11:40 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                 | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                 | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                 | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                 | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | minikube-local-cache-test:functional-877700                 |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache delete                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | minikube-local-cache-test:functional-877700                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	| ssh     | functional-877700 ssh sudo                                  | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-877700                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh                                       | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache reload                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	| ssh     | functional-877700 ssh                                       | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-877700 kubectl --                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | --context functional-877700                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:38:02
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:38:02.204612    6944 out.go:345] Setting OutFile to fd 900 ...
	I0923 11:38:02.253917    6944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:38:02.253917    6944 out.go:358] Setting ErrFile to fd 888...
	I0923 11:38:02.253917    6944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:38:02.273162    6944 out.go:352] Setting JSON to false
	I0923 11:38:02.275728    6944 start.go:129] hostinfo: {"hostname":"minikube5","uptime":487458,"bootTime":1726604023,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:38:02.275728    6944 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:38:02.279725    6944 out.go:177] * [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:38:02.283730    6944 notify.go:220] Checking for updates...
	I0923 11:38:02.283730    6944 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:38:02.285463    6944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:38:02.288475    6944 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:38:02.290764    6944 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:38:02.293225    6944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:38:02.295890    6944 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:38:02.295890    6944 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:38:06.970623    6944 out.go:177] * Using the hyperv driver based on existing profile
	I0923 11:38:06.974208    6944 start.go:297] selected driver: hyperv
	I0923 11:38:06.974208    6944 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:38:06.974470    6944 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:38:07.014741    6944 cni.go:84] Creating CNI manager for ""
	I0923 11:38:07.014741    6944 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:38:07.015736    6944 start.go:340] cluster config:
	{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:38:07.015736    6944 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:38:07.020644    6944 out.go:177] * Starting "functional-877700" primary control-plane node in "functional-877700" cluster
	I0923 11:38:07.022969    6944 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:38:07.022969    6944 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:38:07.022969    6944 cache.go:56] Caching tarball of preloaded images
	I0923 11:38:07.023473    6944 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:38:07.023616    6944 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:38:07.023808    6944 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\config.json ...
	I0923 11:38:07.025687    6944 start.go:360] acquireMachinesLock for functional-877700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:38:07.025853    6944 start.go:364] duration metric: took 137.1µs to acquireMachinesLock for "functional-877700"
	I0923 11:38:07.026065    6944 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:38:07.026065    6944 fix.go:54] fixHost starting: 
	I0923 11:38:07.026724    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:09.384848    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:09.384917    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:09.384917    6944 fix.go:112] recreateIfNeeded on functional-877700: state=Running err=<nil>
	W0923 11:38:09.384917    6944 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:38:09.388905    6944 out.go:177] * Updating the running hyperv "functional-877700" VM ...
	I0923 11:38:09.391227    6944 machine.go:93] provisionDockerMachine start ...
	I0923 11:38:09.391749    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:11.254375    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:11.254375    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:11.254375    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:13.460003    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:13.460003    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:13.463912    6944 main.go:141] libmachine: Using SSH client type: native
	I0923 11:38:13.464065    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:38:13.464065    6944 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:38:13.592173    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:38:13.592173    6944 buildroot.go:166] provisioning hostname "functional-877700"
	I0923 11:38:13.592173    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:15.427187    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:15.427214    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:15.427275    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:17.632561    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:17.632561    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:17.636577    6944 main.go:141] libmachine: Using SSH client type: native
	I0923 11:38:17.636907    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:38:17.636907    6944 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-877700 && echo "functional-877700" | sudo tee /etc/hostname
	I0923 11:38:17.790424    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:38:17.790494    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:19.639967    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:19.640650    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:19.640920    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:21.836435    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:21.836435    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:21.842903    6944 main.go:141] libmachine: Using SSH client type: native
	I0923 11:38:21.843518    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:38:21.843518    6944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-877700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-877700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-877700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:38:21.976081    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:38:21.976081    6944 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 11:38:21.976081    6944 buildroot.go:174] setting up certificates
	I0923 11:38:21.976081    6944 provision.go:84] configureAuth start
	I0923 11:38:21.976263    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:23.839490    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:23.839490    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:23.839490    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:26.048760    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:26.048921    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:26.048921    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:27.910936    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:27.911361    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:27.911408    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:30.094258    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:30.094258    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:30.094331    6944 provision.go:143] copyHostCerts
	I0923 11:38:30.094419    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 11:38:30.094596    6944 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 11:38:30.094671    6944 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 11:38:30.094936    6944 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 11:38:30.095495    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 11:38:30.095495    6944 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 11:38:30.095495    6944 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 11:38:30.096147    6944 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 11:38:30.096498    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 11:38:30.097179    6944 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 11:38:30.097179    6944 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 11:38:30.097467    6944 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 11:38:30.098293    6944 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-877700 san=[127.0.0.1 172.19.157.210 functional-877700 localhost minikube]
	I0923 11:38:30.505550    6944 provision.go:177] copyRemoteCerts
	I0923 11:38:30.513976    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:38:30.514140    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:32.362831    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:32.362831    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:32.362831    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:34.567302    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:34.567302    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:34.567833    6944 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:38:34.676086    6944 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1616984s)
	I0923 11:38:34.676136    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 11:38:34.676285    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:38:34.716876    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 11:38:34.717284    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 11:38:34.765679    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 11:38:34.766015    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:38:34.811514    6944 provision.go:87] duration metric: took 12.8345668s to configureAuth
	I0923 11:38:34.811514    6944 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:38:34.811851    6944 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:38:34.812007    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:36.647394    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:36.647394    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:36.648072    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:38.840896    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:38.840896    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:38.845733    6944 main.go:141] libmachine: Using SSH client type: native
	I0923 11:38:38.846430    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:38:38.846430    6944 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:38:38.980146    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 11:38:38.980146    6944 buildroot.go:70] root file system type: tmpfs
	I0923 11:38:38.980146    6944 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:38:38.980146    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:40.860833    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:40.860833    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:40.860991    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:43.060310    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:43.060310    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:43.068742    6944 main.go:141] libmachine: Using SSH client type: native
	I0923 11:38:43.069687    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:38:43.069687    6944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:38:43.236394    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 11:38:43.236394    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:45.109470    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:45.109470    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:45.109900    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:47.332013    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:47.332013    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:47.336193    6944 main.go:141] libmachine: Using SSH client type: native
	I0923 11:38:47.336694    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:38:47.336756    6944 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:38:47.482556    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:38:47.482641    6944 machine.go:96] duration metric: took 38.088321s to provisionDockerMachine
	I0923 11:38:47.482673    6944 start.go:293] postStartSetup for "functional-877700" (driver="hyperv")
	I0923 11:38:47.482748    6944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:38:47.491788    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:38:47.491788    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:49.315041    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:49.315041    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:49.315671    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:51.570426    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:51.570426    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:51.570426    6944 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:38:51.677535    6944 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1852999s)
	I0923 11:38:51.687825    6944 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:38:51.694312    6944 command_runner.go:130] > NAME=Buildroot
	I0923 11:38:51.694312    6944 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 11:38:51.694312    6944 command_runner.go:130] > ID=buildroot
	I0923 11:38:51.694312    6944 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 11:38:51.694312    6944 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 11:38:51.694312    6944 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:38:51.694312    6944 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 11:38:51.694845    6944 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 11:38:51.695130    6944 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 11:38:51.695720    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 11:38:51.696339    6944 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts -> hosts in /etc/test/nested/copy/3844
	I0923 11:38:51.696339    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts -> /etc/test/nested/copy/3844/hosts
	I0923 11:38:51.705606    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3844
	I0923 11:38:51.723659    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 11:38:51.764782    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts --> /etc/test/nested/copy/3844/hosts (40 bytes)
	I0923 11:38:51.808939    6944 start.go:296] duration metric: took 4.3259309s for postStartSetup
	I0923 11:38:51.809085    6944 fix.go:56] duration metric: took 44.7799963s for fixHost
	I0923 11:38:51.809234    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:53.683612    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:53.684327    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:53.684327    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:38:55.873610    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:38:55.873610    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:55.877354    6944 main.go:141] libmachine: Using SSH client type: native
	I0923 11:38:55.877855    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:38:55.877855    6944 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:38:56.003169    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727091536.228931059
	
	I0923 11:38:56.003169    6944 fix.go:216] guest clock: 1727091536.228931059
	I0923 11:38:56.003307    6944 fix.go:229] Guest: 2024-09-23 11:38:56.228931059 +0000 UTC Remote: 2024-09-23 11:38:51.809189 +0000 UTC m=+49.677337001 (delta=4.419742059s)
	I0923 11:38:56.003543    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:38:57.858972    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:38:57.858972    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:38:57.859253    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:39:00.065204    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:39:00.065204    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:39:00.068716    6944 main.go:141] libmachine: Using SSH client type: native
	I0923 11:39:00.069118    6944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:39:00.069209    6944 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727091536
	I0923 11:39:00.205010    6944 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 11:38:56 UTC 2024
	
	I0923 11:39:00.205541    6944 fix.go:236] clock set: Mon Sep 23 11:38:56 UTC 2024
	 (err=<nil>)
	I0923 11:39:00.205541    6944 start.go:83] releasing machines lock for "functional-877700", held for 53.1760625s
	I0923 11:39:00.205768    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:39:02.101426    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:39:02.102148    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:39:02.102148    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:39:04.330449    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:39:04.330449    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:39:04.333676    6944 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 11:39:04.333676    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:39:04.341251    6944 ssh_runner.go:195] Run: cat /version.json
	I0923 11:39:04.341251    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:39:06.247173    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:39:06.248028    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:39:06.248028    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:39:06.250864    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:39:06.250864    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:39:06.250864    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:39:08.569614    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:39:08.569614    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:39:08.570413    6944 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:39:08.592938    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:39:08.592938    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:39:08.593549    6944 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:39:08.664226    6944 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 11:39:08.664529    6944 ssh_runner.go:235] Completed: cat /version.json: (4.3229856s)
	I0923 11:39:08.674989    6944 ssh_runner.go:195] Run: systemctl --version
	I0923 11:39:08.679452    6944 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0923 11:39:08.679949    6944 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3459792s)
	W0923 11:39:08.680056    6944 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 11:39:08.688173    6944 command_runner.go:130] > systemd 252 (252)
	I0923 11:39:08.688173    6944 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 11:39:08.696597    6944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:39:08.704943    6944 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 11:39:08.706321    6944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:39:08.716946    6944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:39:08.734382    6944 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:39:08.734382    6944 start.go:495] detecting cgroup driver to use...
	I0923 11:39:08.734669    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:39:08.765011    6944 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 11:39:08.775668    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0923 11:39:08.796646    6944 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 11:39:08.796791    6944 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 11:39:08.806483    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:39:08.827485    6944 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:39:08.835485    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:39:08.866224    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:39:08.895732    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:39:08.925720    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:39:08.958395    6944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:39:08.988062    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:39:09.018269    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:39:09.052759    6944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:39:09.086378    6944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:39:09.104280    6944 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0923 11:39:09.113057    6944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:39:09.138061    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:39:09.384333    6944 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:39:09.412692    6944 start.go:495] detecting cgroup driver to use...
	I0923 11:39:09.422583    6944 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:39:09.443593    6944 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 11:39:09.443593    6944 command_runner.go:130] > [Unit]
	I0923 11:39:09.443593    6944 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 11:39:09.443593    6944 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 11:39:09.443593    6944 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 11:39:09.443593    6944 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 11:39:09.443593    6944 command_runner.go:130] > StartLimitBurst=3
	I0923 11:39:09.444609    6944 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 11:39:09.444609    6944 command_runner.go:130] > [Service]
	I0923 11:39:09.444609    6944 command_runner.go:130] > Type=notify
	I0923 11:39:09.444609    6944 command_runner.go:130] > Restart=on-failure
	I0923 11:39:09.444609    6944 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 11:39:09.444609    6944 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 11:39:09.444609    6944 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 11:39:09.444609    6944 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 11:39:09.444609    6944 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 11:39:09.444609    6944 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 11:39:09.444609    6944 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 11:39:09.444609    6944 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 11:39:09.444609    6944 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 11:39:09.444609    6944 command_runner.go:130] > ExecStart=
	I0923 11:39:09.444609    6944 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0923 11:39:09.444609    6944 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 11:39:09.444609    6944 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 11:39:09.444609    6944 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 11:39:09.444609    6944 command_runner.go:130] > LimitNOFILE=infinity
	I0923 11:39:09.444609    6944 command_runner.go:130] > LimitNPROC=infinity
	I0923 11:39:09.444609    6944 command_runner.go:130] > LimitCORE=infinity
	I0923 11:39:09.444609    6944 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 11:39:09.444609    6944 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 11:39:09.444609    6944 command_runner.go:130] > TasksMax=infinity
	I0923 11:39:09.444609    6944 command_runner.go:130] > TimeoutStartSec=0
	I0923 11:39:09.444609    6944 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 11:39:09.444609    6944 command_runner.go:130] > Delegate=yes
	I0923 11:39:09.444609    6944 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 11:39:09.444609    6944 command_runner.go:130] > KillMode=process
	I0923 11:39:09.444609    6944 command_runner.go:130] > [Install]
	I0923 11:39:09.444609    6944 command_runner.go:130] > WantedBy=multi-user.target
	I0923 11:39:09.459596    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:39:09.496578    6944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:39:09.533192    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:39:09.563297    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:39:09.587083    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:39:09.619098    6944 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 11:39:09.627084    6944 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:39:09.633327    6944 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 11:39:09.640924    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:39:09.655964    6944 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:39:09.695958    6944 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:39:09.942434    6944 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:39:10.179308    6944 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:39:10.179673    6944 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:39:10.225012    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:39:10.446619    6944 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:39:23.362870    6944 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.9153793s)
	I0923 11:39:23.374205    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 11:39:23.406664    6944 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 11:39:23.455736    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:39:23.491549    6944 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 11:39:23.701575    6944 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 11:39:23.886400    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:39:24.055779    6944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 11:39:24.094005    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:39:24.123782    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:39:24.313772    6944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 11:39:24.419855    6944 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 11:39:24.429664    6944 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 11:39:24.436593    6944 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 11:39:24.436593    6944 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 11:39:24.437011    6944 command_runner.go:130] > Device: 0,22	Inode: 1510        Links: 1
	I0923 11:39:24.437011    6944 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 11:39:24.437011    6944 command_runner.go:130] > Access: 2024-09-23 11:39:24.566492453 +0000
	I0923 11:39:24.437089    6944 command_runner.go:130] > Modify: 2024-09-23 11:39:24.547490143 +0000
	I0923 11:39:24.437089    6944 command_runner.go:130] > Change: 2024-09-23 11:39:24.550490508 +0000
	I0923 11:39:24.437089    6944 command_runner.go:130] >  Birth: -
	I0923 11:39:24.437223    6944 start.go:563] Will wait 60s for crictl version
	I0923 11:39:24.449471    6944 ssh_runner.go:195] Run: which crictl
	I0923 11:39:24.454723    6944 command_runner.go:130] > /usr/bin/crictl
	I0923 11:39:24.462725    6944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:39:24.562409    6944 command_runner.go:130] > Version:  0.1.0
	I0923 11:39:24.562409    6944 command_runner.go:130] > RuntimeName:  docker
	I0923 11:39:24.562409    6944 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 11:39:24.562409    6944 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 11:39:24.562546    6944 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 11:39:24.570338    6944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:39:24.625918    6944 command_runner.go:130] > 27.3.0
	I0923 11:39:24.633990    6944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:39:24.688802    6944 command_runner.go:130] > 27.3.0
	I0923 11:39:24.693935    6944 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 11:39:24.694059    6944 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 11:39:24.697308    6944 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 11:39:24.697308    6944 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 11:39:24.697308    6944 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 11:39:24.697308    6944 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 11:39:24.699933    6944 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 11:39:24.699933    6944 ip.go:214] interface addr: 172.19.144.1/20
	I0923 11:39:24.708914    6944 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 11:39:24.714924    6944 command_runner.go:130] > 172.19.144.1	host.minikube.internal
	I0923 11:39:24.715574    6944 kubeadm.go:883] updating cluster {Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:39:24.715777    6944 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:39:24.721931    6944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:39:24.754180    6944 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 11:39:24.754180    6944 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 11:39:24.754180    6944 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 11:39:24.754180    6944 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 11:39:24.754180    6944 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 11:39:24.754180    6944 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 11:39:24.754180    6944 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 11:39:24.755046    6944 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:39:24.755097    6944 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 11:39:24.755130    6944 docker.go:615] Images already preloaded, skipping extraction
	I0923 11:39:24.763054    6944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:39:24.799073    6944 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 11:39:24.799073    6944 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 11:39:24.799073    6944 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 11:39:24.799073    6944 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 11:39:24.799073    6944 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 11:39:24.799073    6944 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 11:39:24.799663    6944 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 11:39:24.799663    6944 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:39:24.799726    6944 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 11:39:24.799799    6944 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:39:24.799799    6944 kubeadm.go:934] updating node { 172.19.157.210 8441 v1.31.1 docker true true} ...
	I0923 11:39:24.799992    6944 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-877700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.157.210
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:39:24.806942    6944 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 11:39:24.909685    6944 command_runner.go:130] > cgroupfs
	I0923 11:39:24.909685    6944 cni.go:84] Creating CNI manager for ""
	I0923 11:39:24.909685    6944 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:39:24.909685    6944 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:39:24.909685    6944 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.157.210 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-877700 NodeName:functional-877700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.157.210"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.157.210 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:39:24.909685    6944 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.157.210
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-877700"
	  kubeletExtraArgs:
	    node-ip: 172.19.157.210
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.157.210"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
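A quick sanity check on the generated kubeadm config above: the `podSubnet` (10.244.0.0/16) and `serviceSubnet` (10.96.0.0/12) must be disjoint ranges, and the advertised node IP (172.19.157.210) should sit outside both. A minimal stdlib sketch of that check (values copied from the config dump, not a minikube API):

```python
import ipaddress

# CIDRs and node IP taken from the generated kubeadm config above
pod_subnet = ipaddress.ip_network("10.244.0.0/16")
service_subnet = ipaddress.ip_network("10.96.0.0/12")
node_ip = ipaddress.ip_address("172.19.157.210")

# kubeadm expects the pod and service ranges not to overlap
assert not pod_subnet.overlaps(service_subnet)

# the node's address is cluster-external relative to both internal ranges
assert node_ip not in pod_subnet
assert node_ip not in service_subnet
```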
	I0923 11:39:24.925740    6944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:39:24.948716    6944 command_runner.go:130] > kubeadm
	I0923 11:39:24.948833    6944 command_runner.go:130] > kubectl
	I0923 11:39:24.948833    6944 command_runner.go:130] > kubelet
	I0923 11:39:24.948833    6944 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:39:24.957519    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:39:24.984555    6944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0923 11:39:25.049927    6944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:39:25.078209    6944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0923 11:39:25.149500    6944 ssh_runner.go:195] Run: grep 172.19.157.210	control-plane.minikube.internal$ /etc/hosts
	I0923 11:39:25.165667    6944 command_runner.go:130] > 172.19.157.210	control-plane.minikube.internal
	I0923 11:39:25.179269    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:39:25.489670    6944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:39:25.511431    6944 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700 for IP: 172.19.157.210
	I0923 11:39:25.511431    6944 certs.go:194] generating shared ca certs ...
	I0923 11:39:25.511488    6944 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:39:25.512265    6944 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 11:39:25.512630    6944 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 11:39:25.512823    6944 certs.go:256] generating profile certs ...
	I0923 11:39:25.513430    6944 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\client.key
	I0923 11:39:25.513484    6944 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\apiserver.key.d06a0a8e
	I0923 11:39:25.513484    6944 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\proxy-client.key
	I0923 11:39:25.513484    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 11:39:25.513484    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 11:39:25.514033    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 11:39:25.514176    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 11:39:25.514302    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 11:39:25.514435    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 11:39:25.514563    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 11:39:25.514639    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 11:39:25.515008    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 11:39:25.515322    6944 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 11:39:25.515373    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 11:39:25.515591    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 11:39:25.515923    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 11:39:25.516132    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 11:39:25.516522    6944 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 11:39:25.516717    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:39:25.516846    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 11:39:25.517018    6944 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 11:39:25.518318    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:39:25.574190    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 11:39:25.645152    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:39:25.711445    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 11:39:25.770835    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 11:39:25.834524    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 11:39:25.952759    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:39:26.048896    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 11:39:26.112689    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:39:26.180676    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 11:39:26.266481    6944 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 11:39:26.327992    6944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:39:26.398913    6944 ssh_runner.go:195] Run: openssl version
	I0923 11:39:26.409183    6944 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 11:39:26.419676    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 11:39:26.446629    6944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 11:39:26.456256    6944 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 11:39:26.456256    6944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 11:39:26.468204    6944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 11:39:26.482286    6944 command_runner.go:130] > 3ec20f2e
	I0923 11:39:26.493865    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:39:26.533202    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:39:26.572066    6944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:39:26.580213    6944 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:39:26.580399    6944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:39:26.589787    6944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:39:26.601227    6944 command_runner.go:130] > b5213941
	I0923 11:39:26.609980    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:39:26.665799    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 11:39:26.705698    6944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 11:39:26.712788    6944 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 11:39:26.712914    6944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 11:39:26.724778    6944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 11:39:26.738587    6944 command_runner.go:130] > 51391683
	I0923 11:39:26.746145    6944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
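The three `test -L … || ln -fs …` runs above install each CA certificate under its OpenSSL subject-hash name (e.g. `/etc/ssl/certs/3ec20f2e.0`) only when the link does not already exist. The same idempotent-link logic, sketched in Python with illustrative stand-in paths (not minikube's actual implementation, which shells out as shown in the log):

```python
import os
import tempfile

def ensure_symlink(target: str, link: str) -> None:
    """Mirror `test -L link || ln -fs target link`: create the link only if absent."""
    if not os.path.islink(link):
        if os.path.lexists(link):
            os.remove(link)  # replace a stale regular file, like ln -f would
        os.symlink(target, link)

# stand-ins for /usr/share/ca-certificates/*.pem and /etc/ssl/certs/<hash>.0
tmp = tempfile.mkdtemp()
cert = os.path.join(tmp, "38442.pem")
with open(cert, "w") as f:
    f.write("dummy cert\n")
link = os.path.join(tmp, "3ec20f2e.0")

ensure_symlink(cert, link)
ensure_symlink(cert, link)  # second call is a no-op, matching the guard in the log
assert os.readlink(link) == cert
```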
	I0923 11:39:26.773530    6944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:39:26.779883    6944 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:39:26.779953    6944 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 11:39:26.779953    6944 command_runner.go:130] > Device: 8,1	Inode: 9429283     Links: 1
	I0923 11:39:26.779953    6944 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 11:39:26.779953    6944 command_runner.go:130] > Access: 2024-09-23 11:37:03.719583822 +0000
	I0923 11:39:26.779953    6944 command_runner.go:130] > Modify: 2024-09-23 11:37:03.719583822 +0000
	I0923 11:39:26.780009    6944 command_runner.go:130] > Change: 2024-09-23 11:37:03.719583822 +0000
	I0923 11:39:26.780009    6944 command_runner.go:130] >  Birth: 2024-09-23 11:37:03.719583822 +0000
	I0923 11:39:26.788165    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 11:39:26.796633    6944 command_runner.go:130] > Certificate will not expire
	I0923 11:39:26.806404    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 11:39:26.815744    6944 command_runner.go:130] > Certificate will not expire
	I0923 11:39:26.826137    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 11:39:26.837650    6944 command_runner.go:130] > Certificate will not expire
	I0923 11:39:26.846039    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 11:39:26.860791    6944 command_runner.go:130] > Certificate will not expire
	I0923 11:39:26.871934    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 11:39:26.889066    6944 command_runner.go:130] > Certificate will not expire
	I0923 11:39:26.897587    6944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 11:39:26.908259    6944 command_runner.go:130] > Certificate will not expire
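Each `openssl x509 -checkend 86400` run above reports "Certificate will not expire" only when the certificate's notAfter timestamp is more than 86400 seconds (24 hours) away. The equivalent predicate, sketched with the stdlib (the timestamps here are illustrative, not read from the actual certs):

```python
from datetime import datetime, timedelta, timezone

def will_expire_within(not_after: datetime, seconds: int, now: datetime) -> bool:
    """True when the cert expires within `seconds` -- what -checkend tests for."""
    return not_after <= now + timedelta(seconds=seconds)

# 'now' matches the log's timestamps; the expiry a year out is an assumption
now = datetime(2024, 9, 23, 11, 39, 26, tzinfo=timezone.utc)
not_after = now + timedelta(days=365)

assert not will_expire_within(not_after, 86400, now)   # "Certificate will not expire"
assert will_expire_within(now + timedelta(hours=1), 86400, now)  # would fail the check
```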
	I0923 11:39:26.908799    6944 kubeadm.go:392] StartCluster: {Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:39:26.921364    6944 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 11:39:26.959932    6944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:39:26.977005    6944 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0923 11:39:26.977005    6944 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0923 11:39:26.977005    6944 command_runner.go:130] > /var/lib/minikube/etcd:
	I0923 11:39:26.977005    6944 command_runner.go:130] > member
	I0923 11:39:26.978835    6944 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 11:39:26.978950    6944 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 11:39:26.990089    6944 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 11:39:27.004509    6944 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 11:39:27.006211    6944 kubeconfig.go:125] found "functional-877700" server: "https://172.19.157.210:8441"
	I0923 11:39:27.008275    6944 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:39:27.009211    6944 kapi.go:59] client config for functional-877700: &rest.Config{Host:"https://172.19.157.210:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 11:39:27.011196    6944 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 11:39:27.020993    6944 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 11:39:27.035736    6944 kubeadm.go:630] The running cluster does not require reconfiguration: 172.19.157.210
	I0923 11:39:27.035927    6944 kubeadm.go:1160] stopping kube-system containers ...
	I0923 11:39:27.042629    6944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 11:39:27.097292    6944 command_runner.go:130] > 6c5cbfe07adf
	I0923 11:39:27.097354    6944 command_runner.go:130] > 5cdb3588e916
	I0923 11:39:27.097354    6944 command_runner.go:130] > 9b83feae4011
	I0923 11:39:27.097354    6944 command_runner.go:130] > f0fdd3b0500a
	I0923 11:39:27.097354    6944 command_runner.go:130] > 7c21f80b1432
	I0923 11:39:27.097354    6944 command_runner.go:130] > 4cd7dfae51ea
	I0923 11:39:27.097354    6944 command_runner.go:130] > f338105492d6
	I0923 11:39:27.097354    6944 command_runner.go:130] > 94ebe68eaa34
	I0923 11:39:27.097354    6944 command_runner.go:130] > 62acba478724
	I0923 11:39:27.097354    6944 command_runner.go:130] > b2e0af0c3256
	I0923 11:39:27.097415    6944 command_runner.go:130] > 2ce685dbaa7f
	I0923 11:39:27.097415    6944 command_runner.go:130] > 9593a0bf03ca
	I0923 11:39:27.097415    6944 command_runner.go:130] > 8aeae890b2a3
	I0923 11:39:27.097415    6944 command_runner.go:130] > be0da446957c
	I0923 11:39:27.097415    6944 command_runner.go:130] > 14d205533a2b
	I0923 11:39:27.097415    6944 command_runner.go:130] > fa882d59aaf7
	I0923 11:39:27.097537    6944 command_runner.go:130] > 7f27ce21cc9a
	I0923 11:39:27.097537    6944 command_runner.go:130] > 86498544573d
	I0923 11:39:27.097537    6944 command_runner.go:130] > a309e060ac61
	I0923 11:39:27.097537    6944 command_runner.go:130] > 99bd9defd281
	I0923 11:39:27.097537    6944 command_runner.go:130] > f16ac040529f
	I0923 11:39:27.097537    6944 command_runner.go:130] > 8315b33ac875
	I0923 11:39:27.097537    6944 command_runner.go:130] > 2f4c688acdf7
	I0923 11:39:27.097537    6944 command_runner.go:130] > cc1a8c14f137
	I0923 11:39:27.097537    6944 command_runner.go:130] > 0991f143c31e
	I0923 11:39:27.097537    6944 command_runner.go:130] > 53b80274c7f7
	I0923 11:39:27.097537    6944 command_runner.go:130] > 023338df5e0b
	I0923 11:39:27.097537    6944 command_runner.go:130] > b3b1c0d74fa8
	I0923 11:39:27.097623    6944 docker.go:483] Stopping containers: [6c5cbfe07adf 5cdb3588e916 9b83feae4011 f0fdd3b0500a 7c21f80b1432 4cd7dfae51ea f338105492d6 94ebe68eaa34 62acba478724 b2e0af0c3256 2ce685dbaa7f 9593a0bf03ca 8aeae890b2a3 be0da446957c 14d205533a2b fa882d59aaf7 7f27ce21cc9a 86498544573d a309e060ac61 99bd9defd281 f16ac040529f 8315b33ac875 2f4c688acdf7 cc1a8c14f137 0991f143c31e 53b80274c7f7 023338df5e0b b3b1c0d74fa8]
	I0923 11:39:27.107026    6944 ssh_runner.go:195] Run: docker stop 6c5cbfe07adf 5cdb3588e916 9b83feae4011 f0fdd3b0500a 7c21f80b1432 4cd7dfae51ea f338105492d6 94ebe68eaa34 62acba478724 b2e0af0c3256 2ce685dbaa7f 9593a0bf03ca 8aeae890b2a3 be0da446957c 14d205533a2b fa882d59aaf7 7f27ce21cc9a 86498544573d a309e060ac61 99bd9defd281 f16ac040529f 8315b33ac875 2f4c688acdf7 cc1a8c14f137 0991f143c31e 53b80274c7f7 023338df5e0b b3b1c0d74fa8
	I0923 11:39:37.185551    6944 command_runner.go:130] > 6c5cbfe07adf
	I0923 11:39:37.185551    6944 command_runner.go:130] > 5cdb3588e916
	I0923 11:39:37.185662    6944 command_runner.go:130] > 9b83feae4011
	I0923 11:39:37.185662    6944 command_runner.go:130] > f0fdd3b0500a
	I0923 11:39:37.185662    6944 command_runner.go:130] > 7c21f80b1432
	I0923 11:39:37.185662    6944 command_runner.go:130] > 4cd7dfae51ea
	I0923 11:39:37.185662    6944 command_runner.go:130] > f338105492d6
	I0923 11:39:37.185662    6944 command_runner.go:130] > 94ebe68eaa34
	I0923 11:39:37.185662    6944 command_runner.go:130] > 62acba478724
	I0923 11:39:37.185662    6944 command_runner.go:130] > b2e0af0c3256
	I0923 11:39:37.185662    6944 command_runner.go:130] > 2ce685dbaa7f
	I0923 11:39:37.185662    6944 command_runner.go:130] > 9593a0bf03ca
	I0923 11:39:37.185662    6944 command_runner.go:130] > 8aeae890b2a3
	I0923 11:39:37.185662    6944 command_runner.go:130] > be0da446957c
	I0923 11:39:37.185662    6944 command_runner.go:130] > 14d205533a2b
	I0923 11:39:37.185748    6944 command_runner.go:130] > fa882d59aaf7
	I0923 11:39:37.185748    6944 command_runner.go:130] > 7f27ce21cc9a
	I0923 11:39:37.185748    6944 command_runner.go:130] > 86498544573d
	I0923 11:39:37.185748    6944 command_runner.go:130] > a309e060ac61
	I0923 11:39:37.185748    6944 command_runner.go:130] > 99bd9defd281
	I0923 11:39:37.185748    6944 command_runner.go:130] > f16ac040529f
	I0923 11:39:37.185748    6944 command_runner.go:130] > 8315b33ac875
	I0923 11:39:37.185748    6944 command_runner.go:130] > 2f4c688acdf7
	I0923 11:39:37.185748    6944 command_runner.go:130] > cc1a8c14f137
	I0923 11:39:37.185748    6944 command_runner.go:130] > 0991f143c31e
	I0923 11:39:37.185748    6944 command_runner.go:130] > 53b80274c7f7
	I0923 11:39:37.185824    6944 command_runner.go:130] > 023338df5e0b
	I0923 11:39:37.185824    6944 command_runner.go:130] > b3b1c0d74fa8
	I0923 11:39:37.185824    6944 ssh_runner.go:235] Completed: docker stop 6c5cbfe07adf 5cdb3588e916 9b83feae4011 f0fdd3b0500a 7c21f80b1432 4cd7dfae51ea f338105492d6 94ebe68eaa34 62acba478724 b2e0af0c3256 2ce685dbaa7f 9593a0bf03ca 8aeae890b2a3 be0da446957c 14d205533a2b fa882d59aaf7 7f27ce21cc9a 86498544573d a309e060ac61 99bd9defd281 f16ac040529f 8315b33ac875 2f4c688acdf7 cc1a8c14f137 0991f143c31e 53b80274c7f7 023338df5e0b b3b1c0d74fa8: (10.0727942s)
	I0923 11:39:37.194199    6944 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 11:39:37.263385    6944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:39:37.279855    6944 command_runner.go:130] > -rw------- 1 root root 5647 Sep 23 11:37 /etc/kubernetes/admin.conf
	I0923 11:39:37.279855    6944 command_runner.go:130] > -rw------- 1 root root 5658 Sep 23 11:37 /etc/kubernetes/controller-manager.conf
	I0923 11:39:37.279855    6944 command_runner.go:130] > -rw------- 1 root root 2007 Sep 23 11:37 /etc/kubernetes/kubelet.conf
	I0923 11:39:37.279855    6944 command_runner.go:130] > -rw------- 1 root root 5602 Sep 23 11:37 /etc/kubernetes/scheduler.conf
	I0923 11:39:37.279855    6944 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Sep 23 11:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Sep 23 11:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 23 11:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Sep 23 11:37 /etc/kubernetes/scheduler.conf
	
	I0923 11:39:37.289523    6944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0923 11:39:37.306570    6944 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0923 11:39:37.314515    6944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0923 11:39:37.328942    6944 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0923 11:39:37.337022    6944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0923 11:39:37.352398    6944 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 11:39:37.360535    6944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:39:37.384041    6944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0923 11:39:37.399356    6944 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0923 11:39:37.407915    6944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:39:37.435791    6944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:39:37.451852    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:39:37.513419    6944 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:39:37.513967    6944 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0923 11:39:37.514461    6944 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0923 11:39:37.514806    6944 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 11:39:37.515035    6944 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0923 11:39:37.515345    6944 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0923 11:39:37.515871    6944 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0923 11:39:37.516072    6944 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0923 11:39:37.516590    6944 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0923 11:39:37.516686    6944 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 11:39:37.517058    6944 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 11:39:37.517328    6944 command_runner.go:130] > [certs] Using the existing "sa" key
	I0923 11:39:37.523224    6944 command_runner.go:130] ! W0923 11:39:37.724889    6087 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:37.523224    6944 command_runner.go:130] ! W0923 11:39:37.726807    6087 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:37.523224    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:39:37.578014    6944 command_runner.go:130] ! W0923 11:39:37.793265    6092 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:37.578918    6944 command_runner.go:130] ! W0923 11:39:37.794537    6092 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:38.818135    6944 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:39:38.818191    6944 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0923 11:39:38.818191    6944 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0923 11:39:38.818254    6944 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0923 11:39:38.818254    6944 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:39:38.818254    6944 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:39:38.818292    6944 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2949806s)
	I0923 11:39:38.818292    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:39:38.901574    6944 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:39:38.911557    6944 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:39:38.911557    6944 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 11:39:39.117131    6944 command_runner.go:130] ! W0923 11:39:39.091291    6096 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:39.117366    6944 command_runner.go:130] ! W0923 11:39:39.092116    6096 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:39.117366    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:39:39.185500    6944 command_runner.go:130] ! W0923 11:39:39.397995    6124 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:39.188807    6944 command_runner.go:130] ! W0923 11:39:39.403842    6124 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:39.199011    6944 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:39:39.199011    6944 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:39:39.199011    6944 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:39:39.199011    6944 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:39:39.199011    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:39:39.253693    6944 command_runner.go:130] ! W0923 11:39:39.469102    6130 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:39.263052    6944 command_runner.go:130] ! W0923 11:39:39.478450    6130 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:39.273908    6944 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:39:39.274021    6944 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:39:39.283441    6944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:39:39.786441    6944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:39:40.284398    6944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:39:40.314882    6944 command_runner.go:130] > 6284
	I0923 11:39:40.314882    6944 api_server.go:72] duration metric: took 1.0408233s to wait for apiserver process to appear ...
	I0923 11:39:40.314882    6944 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:39:40.314882    6944 api_server.go:253] Checking apiserver healthz at https://172.19.157.210:8441/healthz ...
	I0923 11:39:43.021428    6944 api_server.go:279] https://172.19.157.210:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 11:39:43.021552    6944 api_server.go:103] status: https://172.19.157.210:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 11:39:43.021588    6944 api_server.go:253] Checking apiserver healthz at https://172.19.157.210:8441/healthz ...
	I0923 11:39:43.116168    6944 api_server.go:279] https://172.19.157.210:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 11:39:43.116325    6944 api_server.go:103] status: https://172.19.157.210:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 11:39:43.315813    6944 api_server.go:253] Checking apiserver healthz at https://172.19.157.210:8441/healthz ...
	I0923 11:39:43.331635    6944 api_server.go:279] https://172.19.157.210:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 11:39:43.331684    6944 api_server.go:103] status: https://172.19.157.210:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 11:39:43.815953    6944 api_server.go:253] Checking apiserver healthz at https://172.19.157.210:8441/healthz ...
	I0923 11:39:43.823709    6944 api_server.go:279] https://172.19.157.210:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 11:39:43.823809    6944 api_server.go:103] status: https://172.19.157.210:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 11:39:44.315291    6944 api_server.go:253] Checking apiserver healthz at https://172.19.157.210:8441/healthz ...
	I0923 11:39:44.323216    6944 api_server.go:279] https://172.19.157.210:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 11:39:44.323402    6944 api_server.go:103] status: https://172.19.157.210:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 11:39:44.816223    6944 api_server.go:253] Checking apiserver healthz at https://172.19.157.210:8441/healthz ...
	I0923 11:39:44.827049    6944 api_server.go:279] https://172.19.157.210:8441/healthz returned 200:
	ok
	I0923 11:39:44.827329    6944 round_trippers.go:463] GET https://172.19.157.210:8441/version
	I0923 11:39:44.827412    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:44.827412    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:44.827490    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:44.836724    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 11:39:44.836724    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:44.836724    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:44.836724    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:44.836724    6944 round_trippers.go:580]     Content-Length: 263
	I0923 11:39:44.836724    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:45 GMT
	I0923 11:39:44.836724    6944 round_trippers.go:580]     Audit-Id: 575c1fa4-45eb-458f-920b-314cc9164eca
	I0923 11:39:44.836724    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:44.836724    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:44.836724    6944 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 11:39:44.836724    6944 api_server.go:141] control plane version: v1.31.1
	I0923 11:39:44.836724    6944 api_server.go:131] duration metric: took 4.5215367s to wait for apiserver health ...
	I0923 11:39:44.837258    6944 cni.go:84] Creating CNI manager for ""
	I0923 11:39:44.837258    6944 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:39:44.841552    6944 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 11:39:44.851689    6944 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 11:39:44.869264    6944 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
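	The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration. Its exact contents are not reproduced in the log; a generic bridge conflist of roughly that shape (the subnet and plugin options here are illustrative assumptions, not the actual file) looks like:

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
```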
	I0923 11:39:44.898346    6944 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:39:44.899352    6944 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 11:39:44.899472    6944 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 11:39:44.899559    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods
	I0923 11:39:44.899559    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:44.899559    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:44.899648    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:44.907474    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 11:39:44.907613    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:44.907613    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:45 GMT
	I0923 11:39:44.907613    6944 round_trippers.go:580]     Audit-Id: 550701ea-3de7-4f41-acad-8a224ceb6040
	I0923 11:39:44.907613    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:44.907613    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:44.907613    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:44.907613    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:44.908735    6944 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52213 chars]
	I0923 11:39:44.914752    6944 system_pods.go:59] 7 kube-system pods found
	I0923 11:39:44.915274    6944 system_pods.go:61] "coredns-7c65d6cfc9-68rgs" [207034a8-50d8-43ec-b01c-2e0a29efdc66] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 11:39:44.915274    6944 system_pods.go:61] "etcd-functional-877700" [517286c0-c0d8-40d8-8952-8002342551dd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0923 11:39:44.915274    6944 system_pods.go:61] "kube-apiserver-functional-877700" [8a3ca5dc-4459-41b9-bd5a-c2a82a2224c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 11:39:44.915274    6944 system_pods.go:61] "kube-controller-manager-functional-877700" [cf271775-be5e-4d15-91cf-0284cdcbe3fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 11:39:44.915392    6944 system_pods.go:61] "kube-proxy-njj9d" [47a01996-aa9d-45b6-90ef-e93fa6bff34b] Running
	I0923 11:39:44.915392    6944 system_pods.go:61] "kube-scheduler-functional-877700" [99b899a7-2a5d-4cfe-a751-8c80b7f4a01c] Running
	I0923 11:39:44.915392    6944 system_pods.go:61] "storage-provisioner" [c5b8b930-03ac-48c2-ab92-e2d2d5d396e4] Running
	I0923 11:39:44.915392    6944 system_pods.go:74] duration metric: took 17.0457ms to wait for pod list to return data ...
	I0923 11:39:44.915392    6944 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:39:44.915392    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes
	I0923 11:39:44.915392    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:44.915392    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:44.915392    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:44.919575    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:44.919575    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:44.919575    6944 round_trippers.go:580]     Audit-Id: 8a967d2e-0b6c-48db-a260-c3afe2faa9b1
	I0923 11:39:44.919575    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:44.919575    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:44.919575    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:44.919575    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:44.919575    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:45 GMT
	I0923 11:39:44.920106    6944 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"574"},"items":[{"metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0923 11:39:44.921016    6944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 11:39:44.921016    6944 node_conditions.go:123] node cpu capacity is 2
	I0923 11:39:44.921016    6944 node_conditions.go:105] duration metric: took 5.6235ms to run NodePressure ...
	I0923 11:39:44.921101    6944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 11:39:45.085986    6944 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0923 11:39:45.198344    6944 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0923 11:39:45.200614    6944 command_runner.go:130] ! W0923 11:39:45.190039    6593 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:45.200736    6944 command_runner.go:130] ! W0923 11:39:45.190843    6593 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:39:45.200736    6944 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0923 11:39:45.201037    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0923 11:39:45.201037    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:45.201112    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:45.201112    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:45.212924    6944 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 11:39:45.213029    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:45.213029    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:45.213029    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:45.213029    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:45.213029    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:45 GMT
	I0923 11:39:45.213029    6944 round_trippers.go:580]     Audit-Id: edbb302f-2b4d-4e05-869a-fcca0d2ed3eb
	I0923 11:39:45.213029    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:45.213643    6944 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31044 chars]
	I0923 11:39:45.215269    6944 kubeadm.go:739] kubelet initialised
	I0923 11:39:45.215269    6944 kubeadm.go:740] duration metric: took 14.5327ms waiting for restarted kubelet to initialise ...
	I0923 11:39:45.215269    6944 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:39:45.215380    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods
	I0923 11:39:45.215380    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:45.215380    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:45.215380    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:45.222321    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:39:45.222321    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:45.222377    6944 round_trippers.go:580]     Audit-Id: e567ea93-b528-4582-a7c7-f36363e8b659
	I0923 11:39:45.222377    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:45.222377    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:45.222377    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:45.222377    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:45.222377    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:45 GMT
	I0923 11:39:45.223013    6944 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52213 chars]
	I0923 11:39:45.224785    6944 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-68rgs" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:45.225398    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:45.225398    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:45.225398    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:45.225460    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:45.227957    6944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 11:39:45.227957    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:45.227957    6944 round_trippers.go:580]     Audit-Id: eb8f0ee5-bfd6-4402-a531-10729c5bc86b
	I0923 11:39:45.227957    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:45.227957    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:45.227957    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:45.227957    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:45.227957    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:45 GMT
	I0923 11:39:45.228897    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6937 chars]
	I0923 11:39:45.229901    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:45.229968    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:45.229968    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:45.229968    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:45.232423    6944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 11:39:45.232423    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:45.232423    6944 round_trippers.go:580]     Audit-Id: 2829c867-531a-4566-ab44-89c75ceef139
	I0923 11:39:45.232488    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:45.232488    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:45.232488    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:45.232488    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:45.232488    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:45 GMT
	I0923 11:39:45.232610    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:45.725699    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:45.725699    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:45.725699    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:45.725699    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:45.730025    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:45.730025    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:45.730025    6944 round_trippers.go:580]     Audit-Id: d292d2e4-429f-404b-91c5-fbcabe142807
	I0923 11:39:45.730025    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:45.730025    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:45.730025    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:45.730025    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:45.730025    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:45 GMT
	I0923 11:39:45.730025    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6937 chars]
	I0923 11:39:45.732127    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:45.732289    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:45.732289    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:45.732289    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:45.736185    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:45.736185    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:45.736291    6944 round_trippers.go:580]     Audit-Id: 71cd5ff9-dc7f-4037-b0c7-60a858f9cd75
	I0923 11:39:45.736291    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:45.736291    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:45.736291    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:45.736291    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:45.736291    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:45 GMT
	I0923 11:39:45.736620    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:46.226152    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:46.226152    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:46.226152    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:46.226152    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:46.230379    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:46.230379    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:46.230546    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:46.230546    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:46 GMT
	I0923 11:39:46.230546    6944 round_trippers.go:580]     Audit-Id: a5abcdac-906b-4610-8d64-1b2fdda633a3
	I0923 11:39:46.230546    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:46.230546    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:46.230546    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:46.230881    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6937 chars]
	I0923 11:39:46.232073    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:46.232132    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:46.232132    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:46.232196    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:46.234591    6944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 11:39:46.235017    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:46.235017    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:46.235017    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:46.235017    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:46.235017    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:46.235017    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:46 GMT
	I0923 11:39:46.235017    6944 round_trippers.go:580]     Audit-Id: 7a37b62c-62ef-424d-be76-08fdb6d7fb2e
	I0923 11:39:46.235189    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:46.726056    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:46.726056    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:46.726056    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:46.726056    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:46.730317    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:46.730446    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:46.730446    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:46.730487    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:46.730487    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:46.730523    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:46 GMT
	I0923 11:39:46.730523    6944 round_trippers.go:580]     Audit-Id: 97d577b2-dd8c-4a14-9340-c9bdb4d8de3d
	I0923 11:39:46.730523    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:46.730835    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6937 chars]
	I0923 11:39:46.732014    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:46.732084    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:46.732084    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:46.732084    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:46.734565    6944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 11:39:46.734598    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:46.734598    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:46.734598    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:46 GMT
	I0923 11:39:46.734598    6944 round_trippers.go:580]     Audit-Id: 76954c60-0342-4b6c-b6f1-f57d7251a8de
	I0923 11:39:46.734598    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:46.734598    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:46.734598    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:46.734813    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:47.227158    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:47.227299    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:47.227299    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:47.227299    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:47.231249    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:47.231249    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:47.231249    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:47.231249    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:47.231249    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:47 GMT
	I0923 11:39:47.231249    6944 round_trippers.go:580]     Audit-Id: 09c83cc6-1307-4c4d-84a3-0015abd08375
	I0923 11:39:47.231249    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:47.231249    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:47.235144    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6937 chars]
	I0923 11:39:47.235763    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:47.235763    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:47.235843    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:47.235843    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:47.238932    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:47.238932    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:47.238932    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:47 GMT
	I0923 11:39:47.238932    6944 round_trippers.go:580]     Audit-Id: 492010ff-a511-474b-95e5-ee2edd7e1404
	I0923 11:39:47.238932    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:47.238932    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:47.238932    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:47.238932    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:47.238932    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:47.239874    6944 pod_ready.go:103] pod "coredns-7c65d6cfc9-68rgs" in "kube-system" namespace has status "Ready":"False"
	I0923 11:39:47.726176    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:47.726176    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:47.726176    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:47.726176    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:47.730636    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:47.730750    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:47.730750    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:47.730750    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:47.730750    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:47.730858    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:47 GMT
	I0923 11:39:47.730918    6944 round_trippers.go:580]     Audit-Id: 191b7663-fa2d-42f1-84cb-b750576217a6
	I0923 11:39:47.730918    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:47.730979    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6937 chars]
	I0923 11:39:47.731852    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:47.731852    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:47.731852    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:47.731852    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:47.734661    6944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 11:39:47.734661    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:47.734661    6944 round_trippers.go:580]     Audit-Id: 5abe773c-eba7-4fa1-a670-e262f2bd2f7d
	I0923 11:39:47.734661    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:47.734661    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:47.734661    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:47.734661    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:47.734661    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:47 GMT
	I0923 11:39:47.735187    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:48.225816    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:48.225816    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:48.225816    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:48.225816    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:48.230306    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:48.230306    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:48.230306    6944 round_trippers.go:580]     Audit-Id: 71b086cd-126d-4638-855e-c6d1cb6fb91e
	I0923 11:39:48.230306    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:48.230306    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:48.230306    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:48.230306    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:48.230306    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:48 GMT
	I0923 11:39:48.230463    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6937 chars]
	I0923 11:39:48.231221    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:48.231221    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:48.231221    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:48.231221    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:48.233269    6944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 11:39:48.234163    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:48.234163    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:48.234163    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:48.234163    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:48.234163    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:48 GMT
	I0923 11:39:48.234163    6944 round_trippers.go:580]     Audit-Id: 6ba1b59a-f7c1-4feb-bb4c-ab78a354a6b7
	I0923 11:39:48.234163    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:48.234276    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:48.726248    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:48.726248    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:48.726248    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:48.726248    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:48.730353    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:48.730898    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:48.730898    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:48.730898    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:48.730898    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:48.730898    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:48 GMT
	I0923 11:39:48.730898    6944 round_trippers.go:580]     Audit-Id: edee47cb-3e26-4e95-bcd4-790cc2545e0a
	I0923 11:39:48.730898    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:48.731210    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"574","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6937 chars]
	I0923 11:39:48.731978    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:48.732078    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:48.732078    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:48.732078    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:48.735071    6944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 11:39:48.735132    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:48.735132    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:48.735132    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:48.735193    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:48.735193    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:48 GMT
	I0923 11:39:48.735193    6944 round_trippers.go:580]     Audit-Id: c5b3860f-9835-4306-a8da-2af7558aff99
	I0923 11:39:48.735193    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:48.736055    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:49.226171    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:49.226171    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:49.226171    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:49.226171    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:49.230692    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:49.231029    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:49.231029    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:49.231029    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:49.231029    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:49.231029    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:49.231131    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:49 GMT
	I0923 11:39:49.231131    6944 round_trippers.go:580]     Audit-Id: cc09e6ba-ab51-4de0-86e6-19354624b45b
	I0923 11:39:49.231793    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"584","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6708 chars]
	I0923 11:39:49.232758    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:49.232758    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:49.232846    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:49.232846    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:49.237269    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:49.237269    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:49.237269    6944 round_trippers.go:580]     Audit-Id: 31b1576d-5f53-41c6-a675-583763f61bea
	I0923 11:39:49.237269    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:49.237269    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:49.237269    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:49.237269    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:49.237269    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:49 GMT
	I0923 11:39:49.238260    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:49.238260    6944 pod_ready.go:93] pod "coredns-7c65d6cfc9-68rgs" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:49.238260    6944 pod_ready.go:82] duration metric: took 4.0132038s for pod "coredns-7c65d6cfc9-68rgs" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:49.238260    6944 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:49.238260    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:49.239076    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:49.239076    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:49.239076    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:49.243373    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:49.243373    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:49.243373    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:49.243373    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:49.243373    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:49 GMT
	I0923 11:39:49.243373    6944 round_trippers.go:580]     Audit-Id: 0665cb52-2a5b-4b75-9ac1-96850d8d29f3
	I0923 11:39:49.243373    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:49.243373    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:49.243373    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:49.243373    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:49.243373    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:49.243373    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:49.243373    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:49.248743    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 11:39:49.248743    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:49.249288    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:49.249288    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:49.249288    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:49.249288    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:49 GMT
	I0923 11:39:49.249288    6944 round_trippers.go:580]     Audit-Id: 6e3d460e-0176-4672-b9db-7f1f1e369581
	I0923 11:39:49.249288    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:49.249518    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:49.738873    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:49.738873    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:49.738873    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:49.738873    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:49.743346    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:49.743426    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:49.743501    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:49.743501    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:49.743501    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:49.743501    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:49 GMT
	I0923 11:39:49.743501    6944 round_trippers.go:580]     Audit-Id: 8963afb8-dd36-4e8b-be7e-774afdbfc6f8
	I0923 11:39:49.743501    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:49.743877    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:49.744850    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:49.744938    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:49.744938    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:49.744938    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:49.750217    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 11:39:49.750217    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:49.750217    6944 round_trippers.go:580]     Audit-Id: 57dddf15-4abc-4aab-ba96-037168c1404d
	I0923 11:39:49.750217    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:49.750217    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:49.750217    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:49.750217    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:49.750217    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:49 GMT
	I0923 11:39:49.750217    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:50.238418    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:50.238418    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:50.238418    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:50.238418    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:50.242514    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:50.242608    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:50.242608    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:50.242608    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:50 GMT
	I0923 11:39:50.242608    6944 round_trippers.go:580]     Audit-Id: ad82dd34-2b7b-4b87-92b4-5063103fbc20
	I0923 11:39:50.242608    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:50.242608    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:50.242700    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:50.242896    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:50.243691    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:50.243798    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:50.243798    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:50.243798    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:50.247055    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:50.247453    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:50.247453    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:50.247453    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:50.247453    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:50 GMT
	I0923 11:39:50.247453    6944 round_trippers.go:580]     Audit-Id: 341c3944-ec29-461f-ab0a-543f8b31f44b
	I0923 11:39:50.247453    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:50.247453    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:50.247861    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:50.738969    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:50.738969    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:50.738969    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:50.738969    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:50.743472    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:50.743551    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:50.743551    6944 round_trippers.go:580]     Audit-Id: 6667cd3d-53ce-45a5-946a-97a763995a6b
	I0923 11:39:50.743551    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:50.743551    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:50.743551    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:50.743551    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:50.743551    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:50 GMT
	I0923 11:39:50.743893    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:50.744780    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:50.744780    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:50.744780    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:50.744780    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:50.750061    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 11:39:50.750061    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:50.750061    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:50.750061    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:50.750061    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:50.750061    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:50 GMT
	I0923 11:39:50.750061    6944 round_trippers.go:580]     Audit-Id: 7912750b-2f61-4736-873b-25344a9e56c0
	I0923 11:39:50.750061    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:50.750061    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:51.238535    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:51.238535    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:51.238535    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:51.238535    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:51.243366    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:51.243366    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:51.243510    6944 round_trippers.go:580]     Audit-Id: 4a5374e0-c2fb-41f6-96ec-e352468b4670
	I0923 11:39:51.243510    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:51.243510    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:51.243510    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:51.243510    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:51.243510    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:51 GMT
	I0923 11:39:51.243906    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:51.244637    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:51.244637    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:51.244637    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:51.244637    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:51.249099    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:51.249218    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:51.249218    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:51.249218    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:51.249218    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:51.249218    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:51 GMT
	I0923 11:39:51.249218    6944 round_trippers.go:580]     Audit-Id: 0ae53266-11d7-483a-ab61-e8d1bd97cc68
	I0923 11:39:51.249218    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:51.249412    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:51.249794    6944 pod_ready.go:103] pod "etcd-functional-877700" in "kube-system" namespace has status "Ready":"False"
	I0923 11:39:51.739060    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:51.739060    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:51.739060    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:51.739060    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:51.743340    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:51.743340    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:51.743340    6944 round_trippers.go:580]     Audit-Id: 6ca48a00-e993-47cb-9f2b-6b09b0388c08
	I0923 11:39:51.743340    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:51.743340    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:51.743340    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:51.743340    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:51.743340    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:51 GMT
	I0923 11:39:51.743340    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:51.744224    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:51.744224    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:51.744224    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:51.744224    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:51.747855    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:51.747855    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:51.747937    6944 round_trippers.go:580]     Audit-Id: a13046a2-a6fe-4647-a764-ed1408ce1340
	I0923 11:39:51.747937    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:51.747937    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:51.747937    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:51.748015    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:51.748042    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:51 GMT
	I0923 11:39:51.748415    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:52.239758    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:52.239758    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:52.239758    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:52.239758    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:52.243793    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:52.243908    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:52.243908    6944 round_trippers.go:580]     Audit-Id: 5c31d433-26b5-4238-ac30-e6eaf87c7804
	I0923 11:39:52.243908    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:52.243908    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:52.243908    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:52.243908    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:52.243908    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:52 GMT
	I0923 11:39:52.244102    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:52.245075    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:52.245075    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:52.245075    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:52.245150    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:52.248664    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:52.248692    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:52.248692    6944 round_trippers.go:580]     Audit-Id: 83695c2c-7acf-4c77-95eb-d28c9ff39462
	I0923 11:39:52.248692    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:52.248692    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:52.248692    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:52.248692    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:52.248811    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:52 GMT
	I0923 11:39:52.249073    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:52.739773    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:52.739773    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:52.739773    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:52.739773    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:52.746613    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:39:52.746613    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:52.746613    6944 round_trippers.go:580]     Audit-Id: 4fbf3dcf-2289-4a00-b01b-37abbe3c40ad
	I0923 11:39:52.746747    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:52.746747    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:52.746747    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:52.746747    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:52.746747    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:52 GMT
	I0923 11:39:52.747526    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:52.748318    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:52.748318    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:52.748318    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:52.748318    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:52.751667    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:52.751722    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:52.751722    6944 round_trippers.go:580]     Audit-Id: 49cf2ca8-db80-4a58-8507-447fa7ebd94a
	I0923 11:39:52.751801    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:52.751801    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:52.751801    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:52.751801    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:52.751801    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:52 GMT
	I0923 11:39:52.752246    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:53.239672    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:53.239672    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:53.239672    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:53.239672    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:53.244481    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:53.244539    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:53.244539    6944 round_trippers.go:580]     Audit-Id: e552f32e-3ec1-49d4-a37c-05ba315a696c
	I0923 11:39:53.244606    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:53.244606    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:53.244606    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:53.244670    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:53.244670    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:53 GMT
	I0923 11:39:53.244874    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:53.245664    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:53.245730    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:53.245730    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:53.245730    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:53.248607    6944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 11:39:53.248607    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:53.248675    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:53.248675    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:53 GMT
	I0923 11:39:53.248675    6944 round_trippers.go:580]     Audit-Id: 9dca7521-97d8-4b1f-96e4-dd3276e663ff
	I0923 11:39:53.248675    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:53.248675    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:53.248675    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:53.248895    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:53.739112    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:53.739112    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:53.739112    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:53.739112    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:53.743265    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:53.743265    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:53.743265    6944 round_trippers.go:580]     Audit-Id: 05ebd0a0-ae23-4693-82ed-4b57e349ccad
	I0923 11:39:53.743265    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:53.743265    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:53.743385    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:53.743385    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:53.743385    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:53 GMT
	I0923 11:39:53.743670    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:53.744146    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:53.744146    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:53.744146    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:53.744146    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:53.750197    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:39:53.750197    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:53.750197    6944 round_trippers.go:580]     Audit-Id: c7f5a115-c16c-43dc-894c-3a935fbda7a9
	I0923 11:39:53.750197    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:53.750197    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:53.750197    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:53.750197    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:53.750197    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:53 GMT
	I0923 11:39:53.750197    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:53.751797    6944 pod_ready.go:103] pod "etcd-functional-877700" in "kube-system" namespace has status "Ready":"False"
	I0923 11:39:54.239810    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:54.239810    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:54.239810    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:54.239810    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:54.244587    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:54.244587    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:54.244672    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:54 GMT
	I0923 11:39:54.244672    6944 round_trippers.go:580]     Audit-Id: a4bc6e45-9b12-4551-9e99-a90eec5f071b
	I0923 11:39:54.244672    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:54.244672    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:54.244672    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:54.244672    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:54.244898    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:54.245707    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:54.245707    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:54.245707    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:54.245707    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:54.249091    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:54.249190    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:54.249190    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:54.249284    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:54.249284    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:54.249284    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:54.249284    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:54 GMT
	I0923 11:39:54.249284    6944 round_trippers.go:580]     Audit-Id: 58ed8f17-6d84-41d4-ab41-f26c4974d629
	I0923 11:39:54.249616    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:54.739606    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:54.739606    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:54.739606    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:54.739606    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:54.744082    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:54.744082    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:54.744082    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:54 GMT
	I0923 11:39:54.744082    6944 round_trippers.go:580]     Audit-Id: 60347a04-dc89-4e65-a716-5be9f37207b8
	I0923 11:39:54.744082    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:54.744082    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:54.744602    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:54.744602    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:54.744695    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:54.745313    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:54.745313    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:54.745313    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:54.745313    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:54.748790    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:54.748790    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:54.748790    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:54.748790    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:54.748790    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:54.748790    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:54.748790    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:54 GMT
	I0923 11:39:54.748790    6944 round_trippers.go:580]     Audit-Id: 39a49bfc-41bd-4b85-8bfe-0bd726668649
	I0923 11:39:54.749077    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:55.239368    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:55.240063    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:55.240063    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:55.240063    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:55.244115    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:55.244194    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:55.244194    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:55 GMT
	I0923 11:39:55.244194    6944 round_trippers.go:580]     Audit-Id: 8f6bb23f-b6d1-423d-85d4-7521a62722f9
	I0923 11:39:55.244194    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:55.244194    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:55.244194    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:55.244277    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:55.244733    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:55.245320    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:55.245372    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:55.245372    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:55.245372    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:55.247017    6944 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 11:39:55.247017    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:55.247017    6944 round_trippers.go:580]     Audit-Id: 647b67f9-057b-4342-b5d1-6aa7950a989a
	I0923 11:39:55.247017    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:55.248031    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:55.248031    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:55.248031    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:55.248031    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:55 GMT
	I0923 11:39:55.248155    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:55.738918    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:55.738918    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:55.738918    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:55.738918    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:55.743532    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:55.743532    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:55.743532    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:55 GMT
	I0923 11:39:55.743532    6944 round_trippers.go:580]     Audit-Id: 057e93fd-6e8d-42c3-8241-6c47f4807c91
	I0923 11:39:55.744080    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:55.744080    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:55.744080    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:55.744080    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:55.744727    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"538","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6910 chars]
	I0923 11:39:55.745776    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:55.745839    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:55.745902    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:55.745902    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:55.749530    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:55.749600    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:55.749600    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:55.749600    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:55.749600    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:55.749600    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:55.749600    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:55 GMT
	I0923 11:39:55.749600    6944 round_trippers.go:580]     Audit-Id: 0547bf8c-63f9-4da3-b629-f2c2a24dce4a
	I0923 11:39:55.749600    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:56.238938    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:56.238938    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.238938    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.238938    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.243777    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:56.243777    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.243929    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.243929    6944 round_trippers.go:580]     Audit-Id: cb3e11e3-480d-4828-82ff-f974d02d630b
	I0923 11:39:56.243929    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.243929    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.243929    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.243929    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.244164    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"596","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6686 chars]
	I0923 11:39:56.245066    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:56.245174    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.245174    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.245174    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.255483    6944 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 11:39:56.255925    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.255956    6944 round_trippers.go:580]     Audit-Id: 3ee6d0c6-a2cc-416c-ab15-39dece9c951b
	I0923 11:39:56.255956    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.255956    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.256002    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.256002    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.256035    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.256537    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:56.257105    6944 pod_ready.go:93] pod "etcd-functional-877700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:56.257105    6944 pod_ready.go:82] duration metric: took 7.0183717s for pod "etcd-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.257105    6944 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.257105    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700
	I0923 11:39:56.257105    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.257105    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.257105    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.262938    6944 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 11:39:56.262938    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.262938    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.262938    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.262938    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.262938    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.262938    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.262938    6944 round_trippers.go:580]     Audit-Id: 3b409a84-3bb8-48b8-a9a4-343a4375e875
	I0923 11:39:56.263607    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-877700","namespace":"kube-system","uid":"8a3ca5dc-4459-41b9-bd5a-c2a82a2224c4","resourceVersion":"592","creationTimestamp":"2024-09-23T11:37:13Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.157.210:8441","kubernetes.io/config.hash":"d94a2590761a98c126cc01e55566a60c","kubernetes.io/config.mirror":"d94a2590761a98c126cc01e55566a60c","kubernetes.io/config.seen":"2024-09-23T11:37:07.489508743Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7912 chars]
	I0923 11:39:56.264345    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:56.264345    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.264345    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.264345    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.268120    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:56.268120    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.268120    6944 round_trippers.go:580]     Audit-Id: ec78b695-8fab-4e30-8597-8d3382939b62
	I0923 11:39:56.268120    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.268120    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.268120    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.268120    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.268120    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.268120    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:56.268120    6944 pod_ready.go:93] pod "kube-apiserver-functional-877700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:56.268120    6944 pod_ready.go:82] duration metric: took 11.0137ms for pod "kube-apiserver-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.268120    6944 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.268120    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-877700
	I0923 11:39:56.268120    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.269131    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.269131    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.271881    6944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 11:39:56.271881    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.271881    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.271881    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.271881    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.271881    6944 round_trippers.go:580]     Audit-Id: 4bc69956-0aed-40b3-9c3c-68cf415de43f
	I0923 11:39:56.271881    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.271881    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.272490    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-877700","namespace":"kube-system","uid":"cf271775-be5e-4d15-91cf-0284cdcbe3fc","resourceVersion":"594","creationTimestamp":"2024-09-23T11:37:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a636be2f1d308ffa848ec077882897f7","kubernetes.io/config.mirror":"a636be2f1d308ffa848ec077882897f7","kubernetes.io/config.seen":"2024-09-23T11:37:14.794198204Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0923 11:39:56.272813    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:56.272813    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.272813    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.272813    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.276435    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:56.276435    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.276435    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.276435    6944 round_trippers.go:580]     Audit-Id: 97de19e1-7a0f-408b-9713-b502be812ba5
	I0923 11:39:56.276435    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.276435    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.276435    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.276435    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.276435    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:56.276435    6944 pod_ready.go:93] pod "kube-controller-manager-functional-877700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:56.276435    6944 pod_ready.go:82] duration metric: took 8.3153ms for pod "kube-controller-manager-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.276435    6944 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-njj9d" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.276435    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-proxy-njj9d
	I0923 11:39:56.276435    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.276435    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.276435    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.285581    6944 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 11:39:56.285581    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.285581    6944 round_trippers.go:580]     Audit-Id: 5c061be3-9c19-4b43-9608-c1ff0dab5321
	I0923 11:39:56.285898    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.285898    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.285898    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.285898    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.285898    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.286220    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-njj9d","generateName":"kube-proxy-","namespace":"kube-system","uid":"47a01996-aa9d-45b6-90ef-e93fa6bff34b","resourceVersion":"544","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"301ec871-4455-4d61-920e-b2e06abb81ec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"301ec871-4455-4d61-920e-b2e06abb81ec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6406 chars]
	I0923 11:39:56.286434    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:56.286434    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.286434    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.286434    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.293571    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 11:39:56.293571    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.293571    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.293571    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.293571    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.293571    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.293571    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.293571    6944 round_trippers.go:580]     Audit-Id: 407f4ef6-974c-49f2-94e3-c8b5736022ca
	I0923 11:39:56.293571    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:56.293571    6944 pod_ready.go:93] pod "kube-proxy-njj9d" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:56.293571    6944 pod_ready.go:82] duration metric: took 17.1344ms for pod "kube-proxy-njj9d" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.293571    6944 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.293571    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-877700
	I0923 11:39:56.293571    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.293571    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.293571    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.296630    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:56.297039    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.297074    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.297074    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.297074    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.297074    6944 round_trippers.go:580]     Audit-Id: f83d31ff-8b3d-4278-8bc4-756e99903f60
	I0923 11:39:56.297074    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.297074    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.300990    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-877700","namespace":"kube-system","uid":"99b899a7-2a5d-4cfe-a751-8c80b7f4a01c","resourceVersion":"534","creationTimestamp":"2024-09-23T11:37:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"75b601e091011beb813ec9f60a3f53d5","kubernetes.io/config.mirror":"75b601e091011beb813ec9f60a3f53d5","kubernetes.io/config.seen":"2024-09-23T11:37:07.489502244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0923 11:39:56.301945    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:56.301995    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.302034    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.302085    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.308220    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:39:56.308220    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.308220    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.308220    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.308220    6944 round_trippers.go:580]     Audit-Id: 3362950f-6638-4ced-8411-e88078c6a73a
	I0923 11:39:56.308220    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.308220    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.308220    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.308764    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:56.308930    6944 pod_ready.go:93] pod "kube-scheduler-functional-877700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:56.308930    6944 pod_ready.go:82] duration metric: took 15.3583ms for pod "kube-scheduler-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.308930    6944 pod_ready.go:39] duration metric: took 11.0929121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:39:56.308930    6944 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:39:56.326209    6944 command_runner.go:130] > -16
	I0923 11:39:56.326242    6944 ops.go:34] apiserver oom_adj: -16
	I0923 11:39:56.326242    6944 kubeadm.go:597] duration metric: took 29.3453106s to restartPrimaryControlPlane
	I0923 11:39:56.326300    6944 kubeadm.go:394] duration metric: took 29.4155154s to StartCluster
	I0923 11:39:56.326335    6944 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:39:56.326590    6944 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:39:56.327675    6944 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:39:56.328946    6944 start.go:235] Will wait 6m0s for node &{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:39:56.328946    6944 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 11:39:56.328946    6944 addons.go:69] Setting storage-provisioner=true in profile "functional-877700"
	I0923 11:39:56.328946    6944 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:39:56.328946    6944 addons.go:234] Setting addon storage-provisioner=true in "functional-877700"
	W0923 11:39:56.328946    6944 addons.go:243] addon storage-provisioner should already be in state true
	I0923 11:39:56.328946    6944 addons.go:69] Setting default-storageclass=true in profile "functional-877700"
	I0923 11:39:56.329472    6944 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-877700"
	I0923 11:39:56.329568    6944 host.go:66] Checking if "functional-877700" exists ...
	I0923 11:39:56.329630    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:39:56.330823    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:39:56.332017    6944 out.go:177] * Verifying Kubernetes components...
	I0923 11:39:56.346182    6944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:39:56.612905    6944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:39:56.638075    6944 node_ready.go:35] waiting up to 6m0s for node "functional-877700" to be "Ready" ...
	I0923 11:39:56.638176    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:56.638176    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.638176    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.638176    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.642325    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:56.642325    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.642325    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.642325    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.642430    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.642430    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.642430    6944 round_trippers.go:580]     Audit-Id: 0a2c46df-45e3-4240-821d-33d8dfc32bb3
	I0923 11:39:56.642430    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.643037    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:56.643131    6944 node_ready.go:49] node "functional-877700" has status "Ready":"True"
	I0923 11:39:56.643131    6944 node_ready.go:38] duration metric: took 4.9544ms for node "functional-877700" to be "Ready" ...
	I0923 11:39:56.643131    6944 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:39:56.643667    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods
	I0923 11:39:56.643667    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.643667    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.643667    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.647976    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:56.647976    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.648053    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:56 GMT
	I0923 11:39:56.648053    6944 round_trippers.go:580]     Audit-Id: 6f213cc2-903e-48be-9c60-5685271f27ca
	I0923 11:39:56.648053    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.648053    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.648053    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.648053    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.648739    6944 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"584","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51254 chars]
	I0923 11:39:56.650510    6944 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-68rgs" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:56.839518    6944 request.go:632] Waited for 188.4784ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:56.839518    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-68rgs
	I0923 11:39:56.839518    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:56.839518    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:56.839518    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:56.843818    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:56.843905    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:56.843972    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:57 GMT
	I0923 11:39:56.843972    6944 round_trippers.go:580]     Audit-Id: c91b05f4-2bb9-4d9e-ab6c-6475d5d000e1
	I0923 11:39:56.843972    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:56.843972    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:56.843972    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:56.843972    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:56.843972    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"584","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6708 chars]
	I0923 11:39:57.039844    6944 request.go:632] Waited for 195.1379ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:57.039844    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:57.040258    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:57.040258    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:57.040258    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:57.044480    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:57.044614    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:57.044614    6944 round_trippers.go:580]     Audit-Id: dcd411de-ccd7-4bde-b52a-68d9eb14eeb4
	I0923 11:39:57.044614    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:57.044614    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:57.044614    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:57.044614    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:57.044614    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:57 GMT
	I0923 11:39:57.045836    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:57.046316    6944 pod_ready.go:93] pod "coredns-7c65d6cfc9-68rgs" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:57.046316    6944 pod_ready.go:82] duration metric: took 395.7794ms for pod "coredns-7c65d6cfc9-68rgs" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:57.046316    6944 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:57.239375    6944 request.go:632] Waited for 192.877ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:57.239375    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700
	I0923 11:39:57.239375    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:57.239375    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:57.239375    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:57.243181    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:57.243344    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:57.243344    6944 round_trippers.go:580]     Audit-Id: e40ef0ff-ccbe-488d-9e74-d9e53f7b0985
	I0923 11:39:57.243344    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:57.243344    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:57.243571    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:57.243571    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:57.243571    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:57 GMT
	I0923 11:39:57.243700    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-877700","namespace":"kube-system","uid":"517286c0-c0d8-40d8-8952-8002342551dd","resourceVersion":"596","creationTimestamp":"2024-09-23T11:37:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.157.210:2379","kubernetes.io/config.hash":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.mirror":"1a2024253238820dd6dd104df30a6dbf","kubernetes.io/config.seen":"2024-09-23T11:37:07.489507043Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6686 chars]
	I0923 11:39:57.439295    6944 request.go:632] Waited for 194.8614ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:57.439295    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:57.439295    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:57.439295    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:57.439295    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:57.443377    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:57.443377    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:57.443377    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:57.443377    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:57.443377    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:57.443522    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:57 GMT
	I0923 11:39:57.443522    6944 round_trippers.go:580]     Audit-Id: 84270272-35bc-42a4-ae93-da3b76a02643
	I0923 11:39:57.443522    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:57.443522    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:57.444172    6944 pod_ready.go:93] pod "etcd-functional-877700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:57.444280    6944 pod_ready.go:82] duration metric: took 397.9368ms for pod "etcd-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:57.444280    6944 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:57.640034    6944 request.go:632] Waited for 195.6388ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700
	I0923 11:39:57.640361    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700
	I0923 11:39:57.640425    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:57.640425    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:57.640425    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:57.643061    6944 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 11:39:57.643061    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:57.643061    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:57 GMT
	I0923 11:39:57.643061    6944 round_trippers.go:580]     Audit-Id: 50142b13-73e4-4ecf-9b54-939ebf53070c
	I0923 11:39:57.643061    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:57.643984    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:57.643984    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:57.643984    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:57.644284    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-877700","namespace":"kube-system","uid":"8a3ca5dc-4459-41b9-bd5a-c2a82a2224c4","resourceVersion":"592","creationTimestamp":"2024-09-23T11:37:13Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.157.210:8441","kubernetes.io/config.hash":"d94a2590761a98c126cc01e55566a60c","kubernetes.io/config.mirror":"d94a2590761a98c126cc01e55566a60c","kubernetes.io/config.seen":"2024-09-23T11:37:07.489508743Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7912 chars]
	I0923 11:39:57.840217    6944 request.go:632] Waited for 195.2588ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:57.840217    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:57.840217    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:57.840217    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:57.840217    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:57.843468    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:57.843468    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:57.843468    6944 round_trippers.go:580]     Audit-Id: 029d7b73-8734-4c55-956e-4e7b5b764795
	I0923 11:39:57.843770    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:57.843770    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:57.843770    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:57.843770    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:57.843770    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:58 GMT
	I0923 11:39:57.843945    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:57.844439    6944 pod_ready.go:93] pod "kube-apiserver-functional-877700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:57.844563    6944 pod_ready.go:82] duration metric: took 400.2555ms for pod "kube-apiserver-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:57.844563    6944 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:58.039755    6944 request.go:632] Waited for 195.0855ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-877700
	I0923 11:39:58.039755    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-877700
	I0923 11:39:58.039755    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:58.039755    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:58.039755    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:58.043681    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:58.043681    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:58.043681    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:58.043681    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:58 GMT
	I0923 11:39:58.043681    6944 round_trippers.go:580]     Audit-Id: 0795988d-e1f9-4fbc-84f9-ba76e8a70b21
	I0923 11:39:58.043681    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:58.043681    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:58.043681    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:58.044024    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-877700","namespace":"kube-system","uid":"cf271775-be5e-4d15-91cf-0284cdcbe3fc","resourceVersion":"594","creationTimestamp":"2024-09-23T11:37:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a636be2f1d308ffa848ec077882897f7","kubernetes.io/config.mirror":"a636be2f1d308ffa848ec077882897f7","kubernetes.io/config.seen":"2024-09-23T11:37:14.794198204Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0923 11:39:58.239909    6944 request.go:632] Waited for 195.3351ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:58.239909    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:58.239909    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:58.239909    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:58.239909    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:58.242929    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:58.243496    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:58.243496    6944 round_trippers.go:580]     Audit-Id: b924da27-6c90-4f58-ab45-8914e3dbafe0
	I0923 11:39:58.243496    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:58.243496    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:58.243496    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:58.243496    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:58.243496    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:58 GMT
	I0923 11:39:58.243680    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:58.243680    6944 pod_ready.go:93] pod "kube-controller-manager-functional-877700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:58.243680    6944 pod_ready.go:82] duration metric: took 399.0904ms for pod "kube-controller-manager-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:58.243680    6944 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-njj9d" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:58.286175    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:39:58.286175    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:39:58.286059    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:39:58.286543    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:39:58.287704    6944 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:39:58.290298    6944 kapi.go:59] client config for functional-877700: &rest.Config{Host:"https://172.19.157.210:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 11:39:58.290298    6944 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:39:58.290916    6944 addons.go:234] Setting addon default-storageclass=true in "functional-877700"
	W0923 11:39:58.290916    6944 addons.go:243] addon default-storageclass should already be in state true
	I0923 11:39:58.290916    6944 host.go:66] Checking if "functional-877700" exists ...
	I0923 11:39:58.292094    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:39:58.292687    6944 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:39:58.292687    6944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:39:58.292687    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:39:58.439907    6944 request.go:632] Waited for 196.2143ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-proxy-njj9d
	I0923 11:39:58.439907    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-proxy-njj9d
	I0923 11:39:58.439907    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:58.439907    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:58.439907    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:58.444013    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:58.444013    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:58.444013    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:58.444117    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:58 GMT
	I0923 11:39:58.444117    6944 round_trippers.go:580]     Audit-Id: ab376772-7f31-4209-a23b-50bdbfde8e8a
	I0923 11:39:58.444117    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:58.444117    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:58.444117    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:58.444413    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-njj9d","generateName":"kube-proxy-","namespace":"kube-system","uid":"47a01996-aa9d-45b6-90ef-e93fa6bff34b","resourceVersion":"544","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"301ec871-4455-4d61-920e-b2e06abb81ec","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"301ec871-4455-4d61-920e-b2e06abb81ec\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6406 chars]
	I0923 11:39:58.639980    6944 request.go:632] Waited for 194.8529ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:58.640286    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:58.640286    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:58.640286    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:58.640286    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:58.643625    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:58.644357    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:58.644357    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:58.644357    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:58.644357    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:58.644357    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:58 GMT
	I0923 11:39:58.644357    6944 round_trippers.go:580]     Audit-Id: 6f7220a0-b61a-429b-8429-6470640db208
	I0923 11:39:58.644357    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:58.645332    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:58.645768    6944 pod_ready.go:93] pod "kube-proxy-njj9d" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:58.645850    6944 pod_ready.go:82] duration metric: took 402.0614ms for pod "kube-proxy-njj9d" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:58.645850    6944 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:58.839402    6944 request.go:632] Waited for 193.5392ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-877700
	I0923 11:39:58.839402    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-877700
	I0923 11:39:58.839402    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:58.839402    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:58.839402    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:58.842958    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:58.842958    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:58.842958    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:58.842958    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:59 GMT
	I0923 11:39:58.842958    6944 round_trippers.go:580]     Audit-Id: 2935ae16-d241-4353-a661-8993ecb695e0
	I0923 11:39:58.842958    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:58.842958    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:58.842958    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:58.842958    6944 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-877700","namespace":"kube-system","uid":"99b899a7-2a5d-4cfe-a751-8c80b7f4a01c","resourceVersion":"534","creationTimestamp":"2024-09-23T11:37:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"75b601e091011beb813ec9f60a3f53d5","kubernetes.io/config.mirror":"75b601e091011beb813ec9f60a3f53d5","kubernetes.io/config.seen":"2024-09-23T11:37:07.489502244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0923 11:39:59.039382    6944 request.go:632] Waited for 195.5764ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:59.039382    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes/functional-877700
	I0923 11:39:59.039382    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:59.039382    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:59.039382    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:59.042609    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:59.043552    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:59.043552    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:59.043552    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:59.043552    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:59 GMT
	I0923 11:39:59.043552    6944 round_trippers.go:580]     Audit-Id: 67f49b32-0912-4b0b-89ca-8f5a33376a62
	I0923 11:39:59.043552    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:59.043692    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:59.043856    6944 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T11:37:11Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0923 11:39:59.044269    6944 pod_ready.go:93] pod "kube-scheduler-functional-877700" in "kube-system" namespace has status "Ready":"True"
	I0923 11:39:59.044269    6944 pod_ready.go:82] duration metric: took 398.3926ms for pod "kube-scheduler-functional-877700" in "kube-system" namespace to be "Ready" ...
	I0923 11:39:59.044375    6944 pod_ready.go:39] duration metric: took 2.4010822s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:39:59.044375    6944 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:39:59.052175    6944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:39:59.075362    6944 command_runner.go:130] > 6284
	I0923 11:39:59.075362    6944 api_server.go:72] duration metric: took 2.7462302s to wait for apiserver process to appear ...
	I0923 11:39:59.075450    6944 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:39:59.075450    6944 api_server.go:253] Checking apiserver healthz at https://172.19.157.210:8441/healthz ...
	I0923 11:39:59.082489    6944 api_server.go:279] https://172.19.157.210:8441/healthz returned 200:
	ok
	I0923 11:39:59.082489    6944 round_trippers.go:463] GET https://172.19.157.210:8441/version
	I0923 11:39:59.082633    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:59.082633    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:59.082633    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:59.082983    6944 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 11:39:59.083908    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:59.083908    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:59 GMT
	I0923 11:39:59.083908    6944 round_trippers.go:580]     Audit-Id: 9dec8d4d-61a1-446d-b9c1-3f36201d6dc5
	I0923 11:39:59.083908    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:59.083908    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:59.083908    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:59.083908    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:59.083908    6944 round_trippers.go:580]     Content-Length: 263
	I0923 11:39:59.083908    6944 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 11:39:59.084046    6944 api_server.go:141] control plane version: v1.31.1
	I0923 11:39:59.084102    6944 api_server.go:131] duration metric: took 8.6513ms to wait for apiserver health ...
	I0923 11:39:59.084151    6944 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:39:59.239908    6944 request.go:632] Waited for 155.7224ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods
	I0923 11:39:59.239908    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods
	I0923 11:39:59.240402    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:59.240402    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:59.240402    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:59.245179    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:59.245179    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:59.245179    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:59 GMT
	I0923 11:39:59.245179    6944 round_trippers.go:580]     Audit-Id: 31d43e60-358d-437a-b3cc-17440e9ea426
	I0923 11:39:59.245179    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:59.245179    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:59.245179    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:59.245179    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:59.245976    6944 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"584","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51254 chars]
	I0923 11:39:59.250602    6944 system_pods.go:59] 7 kube-system pods found
	I0923 11:39:59.250702    6944 system_pods.go:61] "coredns-7c65d6cfc9-68rgs" [207034a8-50d8-43ec-b01c-2e0a29efdc66] Running
	I0923 11:39:59.250702    6944 system_pods.go:61] "etcd-functional-877700" [517286c0-c0d8-40d8-8952-8002342551dd] Running
	I0923 11:39:59.250778    6944 system_pods.go:61] "kube-apiserver-functional-877700" [8a3ca5dc-4459-41b9-bd5a-c2a82a2224c4] Running
	I0923 11:39:59.250778    6944 system_pods.go:61] "kube-controller-manager-functional-877700" [cf271775-be5e-4d15-91cf-0284cdcbe3fc] Running
	I0923 11:39:59.250835    6944 system_pods.go:61] "kube-proxy-njj9d" [47a01996-aa9d-45b6-90ef-e93fa6bff34b] Running
	I0923 11:39:59.250835    6944 system_pods.go:61] "kube-scheduler-functional-877700" [99b899a7-2a5d-4cfe-a751-8c80b7f4a01c] Running
	I0923 11:39:59.250835    6944 system_pods.go:61] "storage-provisioner" [c5b8b930-03ac-48c2-ab92-e2d2d5d396e4] Running
	I0923 11:39:59.250835    6944 system_pods.go:74] duration metric: took 166.6725ms to wait for pod list to return data ...
	I0923 11:39:59.250920    6944 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:39:59.439570    6944 request.go:632] Waited for 188.4097ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/namespaces/default/serviceaccounts
	I0923 11:39:59.439570    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/default/serviceaccounts
	I0923 11:39:59.439570    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:59.439570    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:59.439570    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:59.442778    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:39:59.442778    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:59.442778    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:59.442778    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:59.442778    6944 round_trippers.go:580]     Content-Length: 261
	I0923 11:39:59.442778    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:59 GMT
	I0923 11:39:59.442778    6944 round_trippers.go:580]     Audit-Id: a80c7527-0adc-4482-915a-bed14f88de6c
	I0923 11:39:59.442778    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:59.443483    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:59.443637    6944 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4f6c3cc2-d5e7-4d89-9f74-9d977a2f1eb0","resourceVersion":"336","creationTimestamp":"2024-09-23T11:37:19Z"}}]}
	I0923 11:39:59.444367    6944 default_sa.go:45] found service account: "default"
	I0923 11:39:59.444416    6944 default_sa.go:55] duration metric: took 193.3987ms for default service account to be created ...
	I0923 11:39:59.444416    6944 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:39:59.640158    6944 request.go:632] Waited for 195.5377ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods
	I0923 11:39:59.640158    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods
	I0923 11:39:59.640158    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:59.640158    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:59.640158    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:59.645011    6944 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 11:39:59.645011    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:59.645011    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:59.645011    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:59.645011    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:39:59 GMT
	I0923 11:39:59.645011    6944 round_trippers.go:580]     Audit-Id: 334819f7-aa76-4b20-997e-0e3924e0f409
	I0923 11:39:59.645011    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:59.645129    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:59.645863    6944 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-68rgs","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"207034a8-50d8-43ec-b01c-2e0a29efdc66","resourceVersion":"584","creationTimestamp":"2024-09-23T11:37:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c81ebd8e-0912-424b-aba8-890898aba33a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T11:37:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c81ebd8e-0912-424b-aba8-890898aba33a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51254 chars]
	I0923 11:39:59.648238    6944 system_pods.go:86] 7 kube-system pods found
	I0923 11:39:59.648313    6944 system_pods.go:89] "coredns-7c65d6cfc9-68rgs" [207034a8-50d8-43ec-b01c-2e0a29efdc66] Running
	I0923 11:39:59.648313    6944 system_pods.go:89] "etcd-functional-877700" [517286c0-c0d8-40d8-8952-8002342551dd] Running
	I0923 11:39:59.648313    6944 system_pods.go:89] "kube-apiserver-functional-877700" [8a3ca5dc-4459-41b9-bd5a-c2a82a2224c4] Running
	I0923 11:39:59.648313    6944 system_pods.go:89] "kube-controller-manager-functional-877700" [cf271775-be5e-4d15-91cf-0284cdcbe3fc] Running
	I0923 11:39:59.648313    6944 system_pods.go:89] "kube-proxy-njj9d" [47a01996-aa9d-45b6-90ef-e93fa6bff34b] Running
	I0923 11:39:59.648404    6944 system_pods.go:89] "kube-scheduler-functional-877700" [99b899a7-2a5d-4cfe-a751-8c80b7f4a01c] Running
	I0923 11:39:59.648404    6944 system_pods.go:89] "storage-provisioner" [c5b8b930-03ac-48c2-ab92-e2d2d5d396e4] Running
	I0923 11:39:59.648404    6944 system_pods.go:126] duration metric: took 203.974ms to wait for k8s-apps to be running ...
	I0923 11:39:59.648439    6944 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:39:59.656749    6944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:39:59.681565    6944 system_svc.go:56] duration metric: took 33.1236ms WaitForService to wait for kubelet
	I0923 11:39:59.681565    6944 kubeadm.go:582] duration metric: took 3.3523918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:39:59.681565    6944 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:39:59.840703    6944 request.go:632] Waited for 158.6653ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.157.210:8441/api/v1/nodes
	I0923 11:39:59.840703    6944 round_trippers.go:463] GET https://172.19.157.210:8441/api/v1/nodes
	I0923 11:39:59.840703    6944 round_trippers.go:469] Request Headers:
	I0923 11:39:59.840824    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:39:59.840824    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:39:59.847484    6944 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 11:39:59.847484    6944 round_trippers.go:577] Response Headers:
	I0923 11:39:59.847484    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:39:59.847484    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:39:59.847484    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:39:59.847585    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:39:59.847585    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:40:00 GMT
	I0923 11:39:59.847585    6944 round_trippers.go:580]     Audit-Id: 4ee98004-9eb9-4772-93f3-cd9ab331c558
	I0923 11:39:59.847799    6944 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"596"},"items":[{"metadata":{"name":"functional-877700","uid":"eccee4a1-1135-4d7a-9470-9ff44e843a60","resourceVersion":"528","creationTimestamp":"2024-09-23T11:37:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-877700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"functional-877700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T11_37_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0923 11:39:59.847799    6944 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 11:39:59.847799    6944 node_conditions.go:123] node cpu capacity is 2
	I0923 11:39:59.847799    6944 node_conditions.go:105] duration metric: took 166.2229ms to run NodePressure ...
	I0923 11:39:59.847799    6944 start.go:241] waiting for startup goroutines ...
	I0923 11:40:00.249290    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:40:00.249290    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:40:00.249290    6944 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:40:00.249290    6944 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:40:00.249290    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:40:00.249290    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:40:00.249290    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:40:00.249290    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:40:02.239806    6944 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:40:02.239806    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:40:02.239973    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:40:02.594630    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:40:02.594630    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:40:02.595776    6944 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:40:02.734349    6944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:40:03.152749    6944 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0923 11:40:03.176288    6944 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0923 11:40:03.199597    6944 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0923 11:40:03.220739    6944 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0923 11:40:03.323636    6944 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0923 11:40:03.451593    6944 command_runner.go:130] > pod/storage-provisioner configured
	I0923 11:40:04.584123    6944 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:40:04.584178    6944 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:40:04.584855    6944 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:40:04.731484    6944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:40:04.880429    6944 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0923 11:40:04.881417    6944 round_trippers.go:463] GET https://172.19.157.210:8441/apis/storage.k8s.io/v1/storageclasses
	I0923 11:40:04.881483    6944 round_trippers.go:469] Request Headers:
	I0923 11:40:04.881483    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:40:04.881548    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:40:04.888893    6944 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 11:40:04.888893    6944 round_trippers.go:577] Response Headers:
	I0923 11:40:04.888893    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:40:05 GMT
	I0923 11:40:04.888893    6944 round_trippers.go:580]     Audit-Id: a0b866f0-c236-43d1-95e7-97932b8a260e
	I0923 11:40:04.888893    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:40:04.888893    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:40:04.888893    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:40:04.888893    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:40:04.888893    6944 round_trippers.go:580]     Content-Length: 1273
	I0923 11:40:04.888893    6944 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"602"},"items":[{"metadata":{"name":"standard","uid":"3684e464-f132-4d11-b390-0be9a66a5a7e","resourceVersion":"425","creationTimestamp":"2024-09-23T11:37:28Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T11:37:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0923 11:40:04.888893    6944 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3684e464-f132-4d11-b390-0be9a66a5a7e","resourceVersion":"425","creationTimestamp":"2024-09-23T11:37:28Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T11:37:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0923 11:40:04.888893    6944 round_trippers.go:463] PUT https://172.19.157.210:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 11:40:04.888893    6944 round_trippers.go:469] Request Headers:
	I0923 11:40:04.888893    6944 round_trippers.go:473]     Accept: application/json, */*
	I0923 11:40:04.888893    6944 round_trippers.go:473]     Content-Type: application/json
	I0923 11:40:04.890371    6944 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 11:40:04.893640    6944 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 11:40:04.894176    6944 round_trippers.go:577] Response Headers:
	I0923 11:40:04.894176    6944 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 11365b58-9162-457e-a016-5a81a82d135f
	I0923 11:40:04.894176    6944 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 85f7963a-fac0-496c-9b58-4f01504cb7f1
	I0923 11:40:04.894226    6944 round_trippers.go:580]     Content-Length: 1220
	I0923 11:40:04.894226    6944 round_trippers.go:580]     Date: Mon, 23 Sep 2024 11:40:05 GMT
	I0923 11:40:04.894226    6944 round_trippers.go:580]     Audit-Id: 57246e98-bc4c-413b-b161-99b8b7cb7dcb
	I0923 11:40:04.894226    6944 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 11:40:04.894226    6944 round_trippers.go:580]     Content-Type: application/json
	I0923 11:40:04.894364    6944 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3684e464-f132-4d11-b390-0be9a66a5a7e","resourceVersion":"425","creationTimestamp":"2024-09-23T11:37:28Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T11:37:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0923 11:40:04.897686    6944 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 11:40:04.900714    6944 addons.go:510] duration metric: took 8.5711892s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0923 11:40:04.900714    6944 start.go:246] waiting for cluster config update ...
	I0923 11:40:04.900714    6944 start.go:255] writing updated cluster config ...
	I0923 11:40:04.909295    6944 ssh_runner.go:195] Run: rm -f paused
	I0923 11:40:05.032069    6944 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:40:05.035698    6944 out.go:177] * Done! kubectl is now configured to use "functional-877700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138247609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138416139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138621075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198271806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198440435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198515748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.199563031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.222966524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223195264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223281379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223640342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:39:43Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981692372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981830996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981859501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981957318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.084899403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085158149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085423195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.087385540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:39:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091/resolv.conf as [nameserver 172.19.144.1]"
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.514583300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.522875456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523082393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523369543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	033c968960434       c69fa2e9cbf5f       About a minute ago   Running             coredns                   2                   9e117c8aa2e8f       coredns-7c65d6cfc9-68rgs
	9203c2cf5f288       6e38f40d628db       About a minute ago   Running             storage-provisioner       2                   f760ad6f83776       storage-provisioner
	2840d1510bf70       2e96e5913fc06       About a minute ago   Running             etcd                      2                   873b07335931f       etcd-functional-877700
	c219e269d74e8       6bab7719df100       About a minute ago   Running             kube-apiserver            2                   85217232ef302       kube-apiserver-functional-877700
	3db622a1a6cec       175ffd71cce3d       About a minute ago   Running             kube-controller-manager   2                   e1988c7f254dd       kube-controller-manager-functional-877700
	7e690e0c11479       60c005f310ff3       2 minutes ago        Running             kube-proxy                1                   e4559b860c3c9       kube-proxy-njj9d
	7d20cc069f125       9aa1fad941575       2 minutes ago        Running             kube-scheduler            2                   1318e37c62eb1       kube-scheduler-functional-877700
	6c5cbfe07adf1       c69fa2e9cbf5f       2 minutes ago        Exited              coredns                   1                   f338105492d68       coredns-7c65d6cfc9-68rgs
	5cdb3588e9165       6bab7719df100       2 minutes ago        Exited              kube-apiserver            1                   94ebe68eaa345       kube-apiserver-functional-877700
	9b83feae40112       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       1                   b2e0af0c32564       storage-provisioner
	f0fdd3b0500aa       175ffd71cce3d       2 minutes ago        Exited              kube-controller-manager   1                   62acba4787244       kube-controller-manager-functional-877700
	7c21f80b1432a       2e96e5913fc06       2 minutes ago        Exited              etcd                      1                   2ce685dbaa7fc       etcd-functional-877700
	4cd7dfae51eae       9aa1fad941575       2 minutes ago        Exited              kube-scheduler            1                   9593a0bf03ca0       kube-scheduler-functional-877700
	86498544573d6       60c005f310ff3       4 minutes ago        Exited              kube-proxy                0                   99bd9defd2810       kube-proxy-njj9d
	
	
	==> coredns [033c96896043] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 84be67bfc79374dbf0f7b1050900d3b4b08d81a78db730aed13edbe839abc3cb2446f0d06c08690ac53a97ad9f5103fd82097eeb4b4696d252f023888848e6e0
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33850 - 17163 "HINFO IN 4650748462132086382.3757766483949498301. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029208732s
	
	
	==> coredns [6c5cbfe07adf] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 84be67bfc79374dbf0f7b1050900d3b4b08d81a78db730aed13edbe839abc3cb2446f0d06c08690ac53a97ad9f5103fd82097eeb4b4696d252f023888848e6e0
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:55839 - 36546 "HINFO IN 6849054004478210051.6134143501298962243. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029148935s
	
	
	==> describe nodes <==
	Name:               functional-877700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-877700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=functional-877700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_37_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:37:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-877700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 11:41:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:41:15 +0000   Mon, 23 Sep 2024 11:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:41:15 +0000   Mon, 23 Sep 2024 11:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:41:15 +0000   Mon, 23 Sep 2024 11:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:41:15 +0000   Mon, 23 Sep 2024 11:37:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.157.210
	  Hostname:    functional-877700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8009e1ccb8147a59face2899182713b
	  System UUID:                abaa92f9-b9ed-e449-89e8-5b430fa87ce7
	  Boot ID:                    ae7c364a-1347-4f8e-952b-bd0303f14dc2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-68rgs                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m17s
	  kube-system                 etcd-functional-877700                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m25s
	  kube-system                 kube-apiserver-functional-877700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-functional-877700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-njj9d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-scheduler-functional-877700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  NodeHasSufficientPID     4m30s (x7 over 4m30s)  kubelet          Node functional-877700 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m30s (x8 over 4m30s)  kubelet          Node functional-877700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m30s (x8 over 4m30s)  kubelet          Node functional-877700 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m22s                  kubelet          Node functional-877700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s                  kubelet          Node functional-877700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s                  kubelet          Node functional-877700 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m19s                  kubelet          Node functional-877700 status is now: NodeReady
	  Normal  RegisteredNode           4m18s                  node-controller  Node functional-877700 event: Registered Node functional-877700 in Controller
	  Normal  Starting                 118s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)    kubelet          Node functional-877700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)    kubelet          Node functional-877700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)    kubelet          Node functional-877700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           111s                   node-controller  Node functional-877700 event: Registered Node functional-877700 in Controller
	
	
	==> dmesg <==
	[  +5.706349] systemd-fstab-generator[1820]: Ignoring "noauto" option for root device
	[  +0.101727] kauditd_printk_skb: 48 callbacks suppressed
	[  +7.521152] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[  +0.112931] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.838114] systemd-fstab-generator[2346]: Ignoring "noauto" option for root device
	[  +0.200232] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.212864] kauditd_printk_skb: 88 callbacks suppressed
	[Sep23 11:38] kauditd_printk_skb: 10 callbacks suppressed
	[Sep23 11:39] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.560403] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.266225] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.259657] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +5.242721] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.039221] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.187714] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.177919] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.247363] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.078027] systemd-fstab-generator[4815]: Ignoring "noauto" option for root device
	[  +0.552186] kauditd_printk_skb: 169 callbacks suppressed
	[  +8.107083] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.047969] systemd-fstab-generator[6111]: Ignoring "noauto" option for root device
	[  +0.111564] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.007075] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.332556] systemd-fstab-generator[6642]: Ignoring "noauto" option for root device
	[  +0.159781] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [2840d1510bf7] <==
	{"level":"info","ts":"2024-09-23T11:39:40.612347Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.19.157.210:2380"}
	{"level":"info","ts":"2024-09-23T11:39:40.617506Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.19.157.210:2380"}
	{"level":"info","ts":"2024-09-23T11:39:40.617760Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"66cd91a06148fd26","initial-advertise-peer-urls":["https://172.19.157.210:2380"],"listen-peer-urls":["https://172.19.157.210:2380"],"advertise-client-urls":["https://172.19.157.210:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.157.210:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T11:39:40.618306Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T11:39:40.618566Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"498d38e9e461551a","local-member-id":"66cd91a06148fd26","added-peer-id":"66cd91a06148fd26","added-peer-peer-urls":["https://172.19.157.210:2380"]}
	{"level":"info","ts":"2024-09-23T11:39:40.620505Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"498d38e9e461551a","local-member-id":"66cd91a06148fd26","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:39:40.620604Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:39:40.618587Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:39:40.621349Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:39:41.932411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66cd91a06148fd26 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T11:39:41.932556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66cd91a06148fd26 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:39:41.932823Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66cd91a06148fd26 received MsgPreVoteResp from 66cd91a06148fd26 at term 2"}
	{"level":"info","ts":"2024-09-23T11:39:41.933177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66cd91a06148fd26 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T11:39:41.933202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66cd91a06148fd26 received MsgVoteResp from 66cd91a06148fd26 at term 3"}
	{"level":"info","ts":"2024-09-23T11:39:41.933221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66cd91a06148fd26 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T11:39:41.933236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 66cd91a06148fd26 elected leader 66cd91a06148fd26 at term 3"}
	{"level":"info","ts":"2024-09-23T11:39:41.946491Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"66cd91a06148fd26","local-member-attributes":"{Name:functional-877700 ClientURLs:[https://172.19.157.210:2379]}","request-path":"/0/members/66cd91a06148fd26/attributes","cluster-id":"498d38e9e461551a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:39:41.946499Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:39:41.946803Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:39:41.946777Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:39:41.947432Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:39:41.947880Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:39:41.948326Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:39:41.948688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:39:41.949356Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.157.210:2379"}
	
	
	==> etcd [7c21f80b1432] <==
	{"level":"info","ts":"2024-09-23T11:39:26.508727Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-23T11:39:26.543095Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"498d38e9e461551a","local-member-id":"66cd91a06148fd26","commit-index":554}
	{"level":"info","ts":"2024-09-23T11:39:26.544349Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66cd91a06148fd26 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-23T11:39:26.544380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66cd91a06148fd26 became follower at term 2"}
	{"level":"info","ts":"2024-09-23T11:39:26.544391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 66cd91a06148fd26 [peers: [], term: 2, commit: 554, applied: 0, lastindex: 554, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-23T11:39:26.556263Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-23T11:39:26.588197Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":518}
	{"level":"info","ts":"2024-09-23T11:39:26.610248Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-23T11:39:26.630889Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"66cd91a06148fd26","timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:39:26.639610Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"66cd91a06148fd26"}
	{"level":"info","ts":"2024-09-23T11:39:26.639669Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"66cd91a06148fd26","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-23T11:39:26.639874Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-23T11:39:26.640022Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:39:26.640048Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:39:26.640056Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-23T11:39:26.642375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"66cd91a06148fd26 switched to configuration voters=(7407737080107302182)"}
	{"level":"info","ts":"2024-09-23T11:39:26.642422Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"498d38e9e461551a","local-member-id":"66cd91a06148fd26","added-peer-id":"66cd91a06148fd26","added-peer-peer-urls":["https://172.19.157.210:2380"]}
	{"level":"info","ts":"2024-09-23T11:39:26.642623Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"498d38e9e461551a","local-member-id":"66cd91a06148fd26","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:39:26.642648Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:39:26.668476Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:39:26.696911Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T11:39:26.700396Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.19.157.210:2380"}
	{"level":"info","ts":"2024-09-23T11:39:26.704200Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.19.157.210:2380"}
	{"level":"info","ts":"2024-09-23T11:39:26.705791Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"66cd91a06148fd26","initial-advertise-peer-urls":["https://172.19.157.210:2380"],"listen-peer-urls":["https://172.19.157.210:2380"],"advertise-client-urls":["https://172.19.157.210:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.157.210:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T11:39:26.717481Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> kernel <==
	 11:41:37 up 6 min,  0 users,  load average: 0.25, 0.40, 0.19
	Linux functional-877700 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5cdb3588e916] <==
	I0923 11:39:26.949616       1 options.go:228] external host was not specified, using 172.19.157.210
	I0923 11:39:26.956360       1 server.go:142] Version: v1.31.1
	I0923 11:39:26.956463       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0923 11:39:27.741380       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 11:39:27.742351       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0923 11:39:27.742407       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0923 11:39:27.747999       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:39:27.754409       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0923 11:39:27.754426       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0923 11:39:27.754634       1 instance.go:232] Using reconciler: lease
	W0923 11:39:27.758288       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [c219e269d74e] <==
	I0923 11:39:43.335934       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 11:39:43.337630       1 aggregator.go:171] initial CRD sync complete...
	I0923 11:39:43.337665       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 11:39:43.337672       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 11:39:43.337677       1 cache.go:39] Caches are synced for autoregister controller
	E0923 11:39:43.348508       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 11:39:43.350346       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 11:39:43.355777       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 11:39:43.356003       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 11:39:43.356225       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 11:39:43.357231       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 11:39:43.358763       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 11:39:43.366499       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 11:39:43.378744       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 11:39:43.378913       1 policy_source.go:224] refreshing policies
	I0923 11:39:43.405223       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 11:39:43.425447       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 11:39:44.215578       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 11:39:45.257048       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 11:39:45.273867       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 11:39:45.322751       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 11:39:45.390564       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 11:39:45.404727       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 11:39:47.050593       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 11:39:47.104285       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3db622a1a6ce] <==
	I0923 11:39:46.698291       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0923 11:39:46.698296       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0923 11:39:46.701620       1 shared_informer.go:320] Caches are synced for service account
	I0923 11:39:46.706614       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0923 11:39:46.711115       1 shared_informer.go:320] Caches are synced for namespace
	I0923 11:39:46.712591       1 shared_informer.go:320] Caches are synced for endpoint
	I0923 11:39:46.715556       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0923 11:39:46.716266       1 shared_informer.go:320] Caches are synced for job
	I0923 11:39:46.747521       1 shared_informer.go:320] Caches are synced for taint
	I0923 11:39:46.747586       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0923 11:39:46.747829       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-877700"
	I0923 11:39:46.747959       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0923 11:39:46.770376       1 shared_informer.go:320] Caches are synced for daemon sets
	I0923 11:39:46.818031       1 shared_informer.go:320] Caches are synced for attach detach
	I0923 11:39:46.855785       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:39:46.919673       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 11:39:47.024665       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="326.609356ms"
	I0923 11:39:47.025356       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43.608µs"
	I0923 11:39:47.336920       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 11:39:47.376245       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 11:39:47.376302       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0923 11:39:49.079816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.380249ms"
	I0923 11:39:49.080186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="325.864µs"
	I0923 11:40:44.892600       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-877700"
	I0923 11:41:15.440973       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-877700"
	
	
	==> kube-controller-manager [f0fdd3b0500a] <==
	
	
	==> kube-proxy [7e690e0c1147] <==
	E0923 11:39:34.509208       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:39:34.511434       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	E0923 11:39:35.550717       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	E0923 11:39:37.631651       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	I0923 11:39:43.345866       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.157.210"]
	E0923 11:39:43.346014       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:39:43.392446       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:39:43.392499       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:39:43.392524       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:39:43.396647       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:39:43.397433       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:39:43.397529       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:39:43.398945       1 config.go:199] "Starting service config controller"
	I0923 11:39:43.398998       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:39:43.399088       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:39:43.399198       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:39:43.399885       1 config.go:328] "Starting node config controller"
	I0923 11:39:43.399917       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:39:43.499206       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:39:43.499308       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:39:43.501332       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [86498544573d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:37:21.928029       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 11:37:21.947867       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.157.210"]
	E0923 11:37:21.948002       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:37:22.049790       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:37:22.049833       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:37:22.049863       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:37:22.076172       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:37:22.076489       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:37:22.076504       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:37:22.079475       1 config.go:199] "Starting service config controller"
	I0923 11:37:22.079520       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:37:22.079709       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:37:22.079715       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:37:22.082736       1 config.go:328] "Starting node config controller"
	I0923 11:37:22.082750       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:37:22.179909       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:37:22.179835       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:37:22.183892       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4cd7dfae51ea] <==
	I0923 11:39:27.949508       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kube-scheduler [7d20cc069f12] <==
	W0923 11:39:37.432681       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://172.19.157.210:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:37.432728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://172.19.157.210:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:37.442525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://172.19.157.210:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:37.442617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://172.19.157.210:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:37.468519       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://172.19.157.210:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:37.468555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://172.19.157.210:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:37.504170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://172.19.157.210:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:37.504204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://172.19.157.210:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:37.840754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.19.157.210:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:37.840877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.19.157.210:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:37.896482       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://172.19.157.210:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:37.896540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://172.19.157.210:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:38.352779       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://172.19.157.210:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:38.352825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://172.19.157.210:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:38.556875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://172.19.157.210:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:38.556926       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://172.19.157.210:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:38.558397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.19.157.210:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:38.558433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.19.157.210:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:38.632293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://172.19.157.210:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:38.632325       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://172.19.157.210:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:39.161367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://172.19.157.210:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:39.161514       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://172.19.157.210:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	W0923 11:39:39.164237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://172.19.157.210:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.19.157.210:8441: connect: connection refused
	E0923 11:39:39.164277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://172.19.157.210:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 172.19.157.210:8441: connect: connection refused" logger="UnhandledError"
	I0923 11:39:47.546787       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 11:39:39 functional-877700 kubelet[6119]: I0923 11:39:39.896631    6119 kubelet_node_status.go:72] "Attempting to register node" node="functional-877700"
	Sep 23 11:39:39 functional-877700 kubelet[6119]: E0923 11:39:39.897821    6119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 172.19.157.210:8441: connect: connection refused" node="functional-877700"
	Sep 23 11:39:39 functional-877700 kubelet[6119]: I0923 11:39:39.934305    6119 scope.go:117] "RemoveContainer" containerID="f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32"
	Sep 23 11:39:39 functional-877700 kubelet[6119]: I0923 11:39:39.968117    6119 scope.go:117] "RemoveContainer" containerID="5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848"
	Sep 23 11:39:39 functional-877700 kubelet[6119]: I0923 11:39:39.986374    6119 scope.go:117] "RemoveContainer" containerID="7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024"
	Sep 23 11:39:40 functional-877700 kubelet[6119]: E0923 11:39:40.102071    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="800ms"
	Sep 23 11:39:40 functional-877700 kubelet[6119]: I0923 11:39:40.299009    6119 kubelet_node_status.go:72] "Attempting to register node" node="functional-877700"
	Sep 23 11:39:40 functional-877700 kubelet[6119]: E0923 11:39:40.299954    6119 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 172.19.157.210:8441: connect: connection refused" node="functional-877700"
	Sep 23 11:39:41 functional-877700 kubelet[6119]: I0923 11:39:41.100788    6119 kubelet_node_status.go:72] "Attempting to register node" node="functional-877700"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.455300    6119 apiserver.go:52] "Watching apiserver"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.487781    6119 kubelet_node_status.go:111] "Node was previously registered" node="functional-877700"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.487972    6119 kubelet_node_status.go:75] "Successfully registered node" node="functional-877700"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.488085    6119 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.489212    6119 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.495265    6119 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.495544    6119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5b8b930-03ac-48c2-ab92-e2d2d5d396e4-tmp\") pod \"storage-provisioner\" (UID: \"c5b8b930-03ac-48c2-ab92-e2d2d5d396e4\") " pod="kube-system/storage-provisioner"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.495655    6119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47a01996-aa9d-45b6-90ef-e93fa6bff34b-lib-modules\") pod \"kube-proxy-njj9d\" (UID: \"47a01996-aa9d-45b6-90ef-e93fa6bff34b\") " pod="kube-system/kube-proxy-njj9d"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.495720    6119 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47a01996-aa9d-45b6-90ef-e93fa6bff34b-xtables-lock\") pod \"kube-proxy-njj9d\" (UID: \"47a01996-aa9d-45b6-90ef-e93fa6bff34b\") " pod="kube-system/kube-proxy-njj9d"
	Sep 23 11:39:43 functional-877700 kubelet[6119]: I0923 11:39:43.760591    6119 scope.go:117] "RemoveContainer" containerID="9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3"
	Sep 23 11:39:49 functional-877700 kubelet[6119]: I0923 11:39:49.038729    6119 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 23 11:40:39 functional-877700 kubelet[6119]: E0923 11:40:39.571223    6119 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 11:40:39 functional-877700 kubelet[6119]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 11:40:39 functional-877700 kubelet[6119]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 11:40:39 functional-877700 kubelet[6119]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 11:40:39 functional-877700 kubelet[6119]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [9203c2cf5f28] <==
	I0923 11:39:44.086769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:39:44.102435       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:39:44.102482       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:40:01.519675       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:40:01.520212       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-877700_78eacdcf-52da-4976-bd1e-42e3f8e64009!
	I0923 11:40:01.520997       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c762c7e-0355-4330-93d4-8fff6cef507d", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-877700_78eacdcf-52da-4976-bd1e-42e3f8e64009 became leader
	I0923 11:40:01.620545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-877700_78eacdcf-52da-4976-bd1e-42e3f8e64009!
	
	
	==> storage-provisioner [9b83feae4011] <==
	I0923 11:39:27.134530       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700: (10.3457468s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-877700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (28.91s)

TestFunctional/serial/ExtraConfig (330.55s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-877700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-877700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m19.1196798s)

-- stdout --
	* [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-877700" primary control-plane node in "functional-877700" cluster
	* Updating the running hyperv "functional-877700" VM ...
	
	

-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 23 11:36:16 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.363363894Z" level=info msg="Starting up"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.364381436Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.366062085Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.396070486Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421333599Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421440830Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421570288Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421587008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421667907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421679921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421834109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421929426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421947548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421957860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422282556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422610556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425477453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425563258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425695819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425774515Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425864325Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.426020415Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453345243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453442561Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453474801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453505038Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453531670Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453748134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454565932Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454894032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455202408Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455361702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455394342Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455442301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455467531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455493062Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455526603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455676686Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455719839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455745671Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455780914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456112818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456146960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456171390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456195719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456219749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456243578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456268308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456292437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456320171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456342999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456365226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456389456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456422696Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456459942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456484772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456599912Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456726166Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456763512Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456785339Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456808367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456828992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456851820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456870242Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457499810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457780653Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458271151Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458406216Z" level=info msg="containerd successfully booted in 0.063489s"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.438240515Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.473584775Z" level=info msg="Loading containers: start."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.634831782Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.851751895Z" level=info msg="Loading containers: done."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874123922Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874156661Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874177084Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874278903Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.968950643Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:17 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.969332588Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:44 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.554614697Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556291346Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556587407Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556810554Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556852062Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:45 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:45 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:45 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.606504166Z" level=info msg="Starting up"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.607566487Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.608690520Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1083
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.636170230Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659914064Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659955972Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659987979Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660000482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660028287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660040390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660182519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660274439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660290442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660300844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660323649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660431771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663679446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663727356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663877987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663961805Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663986710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664002713Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664120738Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664205755Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664221759Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664234661Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664246764Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664292974Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664521321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664671252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664703859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664718762Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664734365Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664746668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664757570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664774174Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664787276Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664799379Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664809981Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664820583Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664838487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664852090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664866093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664877295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664892798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664905901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664916803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664928006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664943709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664956511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664969114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664979916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664990619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665012623Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665031027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665043630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665056732Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665171356Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665201862Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665212665Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665224367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665234269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665245171Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665254373Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665604346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665818991Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665891906Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665919111Z" level=info msg="containerd successfully booted in 0.030553s"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.653176350Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.675801552Z" level=info msg="Loading containers: start."
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.797816505Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.918274234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.007319036Z" level=info msg="Loading containers: done."
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028686376Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028806601Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064119439Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:47 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064879197Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:54 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.935065116Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937521126Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937778380Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937936813Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937979322Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:55 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:55 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:55 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.988353475Z" level=info msg="Starting up"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.989122935Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.990176454Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1438
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.017499432Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043088049Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043124956Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043161464Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043189570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043214075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043226777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043374408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043389611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043405915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043416317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043437321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043535541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048684911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048772030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048893355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048907058Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048925862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048940465Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049080094Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049124003Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049137105Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049149608Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049163411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049199118Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049372354Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049445570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049459672Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049470875Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049482677Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049493680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049503882Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049515184Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049527787Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049538989Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049549591Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049559293Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049577897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049589499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049605003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049621306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049668716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049680618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049770737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049783840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049795442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049809645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049820347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049830650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049840952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049854054Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049872358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049882760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049892962Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049957876Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049973979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049984782Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049996884Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050008086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050020589Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050030991Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050284044Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050364160Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050404669Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050421872Z" level=info msg="containerd successfully booted in 0.033699s"
	Sep 23 11:36:57 functional-877700 dockerd[1431]: time="2024-09-23T11:36:57.056326286Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.408774280Z" level=info msg="Loading containers: start."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.555047973Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.673736035Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.765116623Z" level=info msg="Loading containers: done."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790598218Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790686536Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.830332574Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:37:00 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.832121546Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.354009805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358760766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358775765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358863661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362188493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362427881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362723466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.363713216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.413758696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414484559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414540656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414737947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452495445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452537743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452547142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452655437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745032012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745369195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745520387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745802373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789231786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789328681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789386278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789667064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.852945577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853277660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853394154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.854419803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858509897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858696287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858725086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858836580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113489212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113703733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113877050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.119697810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.250500799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251616406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251773022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.252013345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304389586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304456992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304473794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304694515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625584633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625869859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625919364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.626072078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028719988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028812805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028850011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028993837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.072947376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073257230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073388453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073845734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822307240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822465368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822691908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822937552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.096995123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097134647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097148750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097582227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.272189447Z" level=info msg="ignoring event" container=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273020895Z" level=info msg="shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273141316Z" level=warning msg="cleaning up after shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273154519Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.446778855Z" level=info msg="ignoring event" container=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447046502Z" level=info msg="shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447173525Z" level=warning msg="cleaning up after shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447185327Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.683312452Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.829405193Z" level=info msg="ignoring event" container=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829931000Z" level=info msg="shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829990913Z" level=warning msg="cleaning up after shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.830002115Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.854549329Z" level=info msg="ignoring event" container=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854556130Z" level=info msg="shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854794479Z" level=warning msg="cleaning up after shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854869594Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.867096392Z" level=info msg="ignoring event" container=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.866991770Z" level=info msg="shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868159609Z" level=warning msg="cleaning up after shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868225122Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.873572915Z" level=info msg="ignoring event" container=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874484301Z" level=info msg="shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874598524Z" level=warning msg="cleaning up after shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874637232Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887059470Z" level=info msg="ignoring event" container=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887368033Z" level=info msg="shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887425744Z" level=warning msg="cleaning up after shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887435246Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887671094Z" level=info msg="shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887819125Z" level=info msg="ignoring event" container=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887879937Z" level=warning msg="cleaning up after shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.888018065Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907455436Z" level=info msg="ignoring event" container=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907495744Z" level=info msg="ignoring event" container=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925651252Z" level=info msg="shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925962016Z" level=warning msg="cleaning up after shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925973318Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936426853Z" level=info msg="shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936579385Z" level=warning msg="cleaning up after shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936607990Z" level=info msg="ignoring event" container=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936635296Z" level=info msg="ignoring event" container=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936653800Z" level=info msg="ignoring event" container=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936712612Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941574305Z" level=info msg="shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941655421Z" level=warning msg="cleaning up after shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941664923Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949111144Z" level=info msg="shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949309285Z" level=warning msg="cleaning up after shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949319987Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958358133Z" level=info msg="shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958473657Z" level=warning msg="cleaning up after shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958521967Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.968525410Z" level=info msg="shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.968824371Z" level=info msg="ignoring event" container=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969298368Z" level=warning msg="cleaning up after shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969415792Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1431]: time="2024-09-23T11:39:15.769798933Z" level=info msg="ignoring event" container=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771375655Z" level=info msg="shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771441369Z" level=warning msg="cleaning up after shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771451871Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.813293151Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869801842Z" level=info msg="shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869856448Z" level=warning msg="cleaning up after shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869865649Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.870564216Z" level=info msg="ignoring event" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932188905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932979382Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933172501Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933202803Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:39:21 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Consumed 4.872s CPU time.
	Sep 23 11:39:21 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984115697Z" level=info msg="Starting up"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984939583Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.986050598Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4218
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.016036706Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039700313Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039806824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039839228Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039850929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039873232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039883433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040054452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040204468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040224670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040235171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040258474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040353184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045464247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045565559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045977304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046077715Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046108618Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046167125Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046524364Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046646378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046666980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046682082Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046696783Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046754290Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047082126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047279447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047378458Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047396660Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047415362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047427264Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047440265Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047453267Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047467568Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047478569Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047489270Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047499572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047517474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047531675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047552577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047565479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047576180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047587681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047597882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047608984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047620485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047634286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047644388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047654189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047665990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047678791Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047697893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047708595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047722096Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047810506Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047829408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047840109Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047850910Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047860211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047874313Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047885714Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048256055Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048443976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048557088Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048575990Z" level=info msg="containerd successfully booted in 0.034003s"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.033830503Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.063758076Z" level=info msg="Loading containers: start."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.280002266Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.409457886Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.509213361Z" level=info msg="Loading containers: done."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543477036Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543685761Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.575120708Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:39:23 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.577119640Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115570481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115625188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115637189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115709298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227172822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227232230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227245531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227362246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.339817796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342223901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342254105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342468032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497814816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497986738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498064248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498360885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.525834667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526017090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526076497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.536907970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750278307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750411323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750425225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750516437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900046084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900773476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902226560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902856440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.939828625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940224975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940315387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.942516766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.185965232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.189330369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193023849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193305085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261234511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261411734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261512547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261673268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428281214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428580653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428743174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.432109011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694521004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694741033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694790839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694956561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.566925297Z" level=info msg="ignoring event" container=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568301726Z" level=info msg="shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568863020Z" level=warning msg="cleaning up after shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.569351401Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573603109Z" level=info msg="shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573696525Z" level=warning msg="cleaning up after shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573707227Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.573984273Z" level=info msg="ignoring event" container=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.578903092Z" level=info msg="ignoring event" container=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.579643816Z" level=info msg="shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581215878Z" level=warning msg="cleaning up after shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.581346099Z" level=info msg="ignoring event" container=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581428513Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581951000Z" level=info msg="shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581995407Z" level=warning msg="cleaning up after shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.582004009Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.597570502Z" level=info msg="ignoring event" container=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610389638Z" level=info msg="shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610530461Z" level=warning msg="cleaning up after shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610645881Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.621577202Z" level=info msg="ignoring event" container=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622367133Z" level=info msg="shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622475751Z" level=warning msg="cleaning up after shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622546463Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.635696754Z" level=info msg="shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636175234Z" level=warning msg="cleaning up after shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636360765Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.651928858Z" level=info msg="ignoring event" container=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.652691485Z" level=info msg="shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.660686917Z" level=info msg="ignoring event" container=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.663213638Z" level=info msg="ignoring event" container=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.669991668Z" level=warning msg="cleaning up after shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.670085183Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.660509388Z" level=info msg="shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.679562562Z" level=warning msg="cleaning up after shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.681033207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184736627Z" level=info msg="shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184785835Z" level=warning msg="cleaning up after shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184795637Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.185745696Z" level=info msg="ignoring event" container=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.278984682Z" level=info msg="shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279055694Z" level=warning msg="cleaning up after shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279067096Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.279977748Z" level=info msg="ignoring event" container=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410094699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410458659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410613285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410941440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568446569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568537384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568556887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568657804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.670526533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676344305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676364709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676451423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710366393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710454707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710469310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710551424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732630814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732737932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732870054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.733058486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997622111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997807842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997867052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997990773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094861386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094998109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095017513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095210746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.331029320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333698174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333727979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.334197760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:35 functional-877700 dockerd[4212]: time="2024-09-23T11:39:35.396427521Z" level=error msg="collecting stats for container /k8s_coredns_coredns-7c65d6cfc9-68rgs_kube-system_207034a8-50d8-43ec-b01c-2e0a29efdc66_1: invalid id: "
	Sep 23 11:39:35 functional-877700 dockerd[4212]: 2024/09/23 11:39:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Sep 23 11:39:37 functional-877700 dockerd[4212]: time="2024-09-23T11:39:37.337336680Z" level=info msg="ignoring event" container=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338143319Z" level=info msg="shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338506781Z" level=warning msg="cleaning up after shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338593696Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.137955358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138247609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138416139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138621075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198271806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198440435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198515748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.199563031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.222966524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223195264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223281379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223640342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981692372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981830996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981859501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981957318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.084899403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085158149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085423195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.087385540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.514583300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.522875456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523082393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523369543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.063783746Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:42:57 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.269387669Z" level=info msg="ignoring event" container=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272857372Z" level=info msg="shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272914983Z" level=warning msg="cleaning up after shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272989998Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273759654Z" level=info msg="shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273807364Z" level=warning msg="cleaning up after shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273816366Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274076818Z" level=info msg="ignoring event" container=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274334971Z" level=info msg="ignoring event" container=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281295780Z" level=info msg="shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281347090Z" level=warning msg="cleaning up after shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281429207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.281785979Z" level=info msg="ignoring event" container=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288761591Z" level=info msg="shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288857311Z" level=warning msg="cleaning up after shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288899919Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.289709683Z" level=info msg="ignoring event" container=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.289885119Z" level=info msg="shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290052853Z" level=warning msg="cleaning up after shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290077758Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.301171104Z" level=info msg="shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.301356441Z" level=info msg="ignoring event" container=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.303740624Z" level=info msg="ignoring event" container=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.304055988Z" level=info msg="ignoring event" container=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305020783Z" level=warning msg="cleaning up after shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305125504Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314046710Z" level=info msg="ignoring event" container=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314082117Z" level=info msg="ignoring event" container=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.302995873Z" level=info msg="shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314416185Z" level=warning msg="cleaning up after shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314428687Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317021012Z" level=info msg="shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317067522Z" level=warning msg="cleaning up after shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317077724Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.303304235Z" level=info msg="shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323122848Z" level=warning msg="cleaning up after shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323280880Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331804205Z" level=info msg="shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331915728Z" level=warning msg="cleaning up after shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331964638Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.351922178Z" level=info msg="ignoring event" container=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.352121318Z" level=info msg="ignoring event" container=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.352878171Z" level=info msg="shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353018500Z" level=warning msg="cleaning up after shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353272851Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353087514Z" level=info msg="shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366800790Z" level=warning msg="cleaning up after shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366924715Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4212]: time="2024-09-23T11:43:02.178902577Z" level=info msg="ignoring event" container=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.180564113Z" level=info msg="shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.181657335Z" level=warning msg="cleaning up after shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.182298464Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.148744009Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.187274688Z" level=info msg="ignoring event" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187333094Z" level=info msg="shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187370797Z" level=warning msg="cleaning up after shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187380798Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.255786085Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256042511Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256207728Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256269334Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:43:08 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Consumed 9.026s CPU time.
	Sep 23 11:43:08 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:43:08 functional-877700 dockerd[8665]: time="2024-09-23T11:43:08.304403480Z" level=info msg="Starting up"
	Sep 23 11:44:08 functional-877700 dockerd[8665]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 11:44:08 functional-877700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-877700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:761: restart took 2m19.2485988s for "functional-877700" cluster.
I0923 11:44:08.381991    3844 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700: exit status 2 (10.3849667s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs -n 25
E0923 11:45:29.612241    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:46:52.698504    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs -n 25: (2m50.1700854s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:32 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:34 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:34 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-191100                                                         | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:34 UTC |
	| start   | -p functional-877700                                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:38 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-877700                                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:38 UTC | 23 Sep 24 11:40 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | minikube-local-cache-test:functional-877700                              |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache delete                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | minikube-local-cache-test:functional-877700                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	| ssh     | functional-877700 ssh sudo                                               | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-877700                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh                                                    | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache reload                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	| ssh     | functional-877700 ssh                                                    | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-877700 kubectl --                                             | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | --context functional-877700                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-877700                                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:41:49
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:41:49.209678    7852 out.go:345] Setting OutFile to fd 292 ...
	I0923 11:41:49.254977    7852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:41:49.254977    7852 out.go:358] Setting ErrFile to fd 284...
	I0923 11:41:49.254977    7852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:41:49.276647    7852 out.go:352] Setting JSON to false
	I0923 11:41:49.282204    7852 start.go:129] hostinfo: {"hostname":"minikube5","uptime":487685,"bootTime":1726604023,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:41:49.282285    7852 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:41:49.287181    7852 out.go:177] * [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:41:49.288949    7852 notify.go:220] Checking for updates...
	I0923 11:41:49.290898    7852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:41:49.293889    7852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:41:49.295578    7852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:41:49.302549    7852 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:41:49.308496    7852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:41:49.312968    7852 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:41:49.313645    7852 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:41:53.910812    7852 out.go:177] * Using the hyperv driver based on existing profile
	I0923 11:41:53.912248    7852 start.go:297] selected driver: hyperv
	I0923 11:41:53.912248    7852 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:41:53.913184    7852 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:41:53.952383    7852 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:41:53.952383    7852 cni.go:84] Creating CNI manager for ""
	I0923 11:41:53.952383    7852 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:41:53.952383    7852 start.go:340] cluster config:
	{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:41:53.952999    7852 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:41:53.957227    7852 out.go:177] * Starting "functional-877700" primary control-plane node in "functional-877700" cluster
	I0923 11:41:53.959364    7852 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:41:53.959364    7852 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:41:53.959364    7852 cache.go:56] Caching tarball of preloaded images
	I0923 11:41:53.960009    7852 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:41:53.960009    7852 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:41:53.960009    7852 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\config.json ...
	I0923 11:41:53.961599    7852 start.go:360] acquireMachinesLock for functional-877700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:41:53.961599    7852 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-877700"
	I0923 11:41:53.961599    7852 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:41:53.961599    7852 fix.go:54] fixHost starting: 
	I0923 11:41:53.962631    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:41:56.287566    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:41:56.287566    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:41:56.287566    7852 fix.go:112] recreateIfNeeded on functional-877700: state=Running err=<nil>
	W0923 11:41:56.287566    7852 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:41:56.291781    7852 out.go:177] * Updating the running hyperv "functional-877700" VM ...
	I0923 11:41:56.293697    7852 machine.go:93] provisionDockerMachine start ...
	I0923 11:41:56.293697    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:41:58.141209    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:41:58.141209    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:41:58.141860    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:00.334077    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:00.334077    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:00.340169    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:00.340820    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:00.340820    7852 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:42:00.476012    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:42:00.476012    7852 buildroot.go:166] provisioning hostname "functional-877700"
	I0923 11:42:00.476196    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:02.350414    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:02.350414    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:02.350489    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:04.528996    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:04.528996    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:04.532106    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:04.532663    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:04.532663    7852 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-877700 && echo "functional-877700" | sudo tee /etc/hostname
	I0923 11:42:04.695359    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:42:04.695462    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:06.512485    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:06.512485    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:06.512575    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:08.680076    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:08.680076    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:08.685385    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:08.685385    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:08.685385    7852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-877700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-877700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-877700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:42:08.818616    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:42:08.818779    7852 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 11:42:08.818779    7852 buildroot.go:174] setting up certificates
	I0923 11:42:08.818779    7852 provision.go:84] configureAuth start
	I0923 11:42:08.818921    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:12.871773    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:12.871773    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:12.872169    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:16.857888    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:16.857888    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:16.857888    7852 provision.go:143] copyHostCerts
	I0923 11:42:16.859128    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 11:42:16.859128    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 11:42:16.859459    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 11:42:16.860464    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 11:42:16.860464    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 11:42:16.860464    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 11:42:16.861061    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 11:42:16.861061    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 11:42:16.861668    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 11:42:16.862376    7852 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-877700 san=[127.0.0.1 172.19.157.210 functional-877700 localhost minikube]
	I0923 11:42:17.030195    7852 provision.go:177] copyRemoteCerts
	I0923 11:42:17.038185    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:42:17.038185    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:18.866277    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:18.866277    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:18.866359    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:21.044973    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:21.044973    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:21.045318    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:21.146797    7852 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1083353s)
	I0923 11:42:21.147358    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 11:42:21.190478    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:42:21.235758    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:42:21.278554    7852 provision.go:87] duration metric: took 12.4587259s to configureAuth
	I0923 11:42:21.278554    7852 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:42:21.279490    7852 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:42:21.279632    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:25.288342    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:25.288342    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:25.293586    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:25.294322    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:25.294322    7852 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:42:25.433856    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 11:42:25.433977    7852 buildroot.go:70] root file system type: tmpfs
	I0923 11:42:25.433977    7852 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:42:25.434214    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:27.284575    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:27.284575    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:27.284627    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:29.509670    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:29.509670    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:29.514280    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:29.514512    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:29.514512    7852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:42:29.686040    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
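The unit file echoed above demonstrates the rule its own comment describes: in a systemd drop-in, list-type directives such as `ExecStart=` append to the inherited value instead of replacing it, so the override must first clear the base unit's command with an empty assignment. A minimal, hypothetical illustration of the pattern (paths and flags are placeholders, not minikube's actual configuration):

```ini
# /etc/systemd/system/docker.service.d/override.conf (illustrative only)
[Service]
# The empty assignment clears the ExecStart inherited from the base unit.
# Without it, systemd sees two ExecStart= settings and refuses to start the
# service: "Service has more than one ExecStart= setting, which is only
# allowed for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd --some-flag
```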
	I0923 11:42:29.686110    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:31.546974    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:31.546974    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:31.547611    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:33.788582    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:33.788582    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:33.791693    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:33.792102    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:33.792102    7852 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:42:33.934969    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
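The SSH command above uses a common "install only if changed" idiom: `diff -u old new` exits zero when the files are identical, so the `|| { ...; }` branch — move the new file into place, reload, and restart — fires only when the unit actually changed (or did not exist). A sketch of the same idiom against scratch files, with the systemctl calls replaced by a stub:

```shell
#!/bin/sh
# Sketch of the "replace only when changed" idiom from the log, using
# /tmp scratch files instead of /lib/systemd/system/docker.service.
old=$(mktemp)
new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd\n' > "$old"
printf 'ExecStart=/usr/bin/dockerd --debug\n' > "$new"

# Stub standing in for: systemctl daemon-reload && systemctl restart docker
reload() { echo "would reload and restart docker"; }

# diff exits 0 when identical, so an unchanged unit file never triggers
# a daemon restart; only a real change replaces the file and reloads.
diff -u "$old" "$new" >/dev/null || { mv "$new" "$old"; reload; }

grep -- --debug "$old"
```

The payoff is idempotence: re-running provisioning against an already-configured VM leaves the running docker daemon untouched.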
	I0923 11:42:33.934969    7852 machine.go:96] duration metric: took 37.6387313s to provisionDockerMachine
	I0923 11:42:33.934969    7852 start.go:293] postStartSetup for "functional-877700" (driver="hyperv")
	I0923 11:42:33.934969    7852 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:42:33.944284    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:42:33.944798    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:38.034820    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:38.034820    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:38.034934    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:38.139038    7852 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.194305s)
	I0923 11:42:38.150165    7852 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:42:38.158855    7852 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:42:38.158919    7852 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 11:42:38.159371    7852 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 11:42:38.160573    7852 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 11:42:38.161924    7852 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts -> hosts in /etc/test/nested/copy/3844
	I0923 11:42:38.171682    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3844
	I0923 11:42:38.188615    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 11:42:38.227276    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts --> /etc/test/nested/copy/3844/hosts (40 bytes)
	I0923 11:42:38.273146    7852 start.go:296] duration metric: took 4.337884s for postStartSetup
	I0923 11:42:38.273276    7852 fix.go:56] duration metric: took 44.3086851s for fixHost
	I0923 11:42:38.273367    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:40.096277    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:40.096277    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:40.097281    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:42.292209    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:42.292209    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:42.295797    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:42.295797    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:42.295797    7852 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:42:42.422879    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727091762.660852161
	
	I0923 11:42:42.422879    7852 fix.go:216] guest clock: 1727091762.660852161
	I0923 11:42:42.422879    7852 fix.go:229] Guest: 2024-09-23 11:42:42.660852161 +0000 UTC Remote: 2024-09-23 11:42:38.273276 +0000 UTC m=+49.132292601 (delta=4.387576161s)
	I0923 11:42:42.423001    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:44.241611    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:44.241611    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:44.241701    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:46.426874    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:46.426874    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:46.431658    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:46.432084    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:46.432084    7852 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727091762
	I0923 11:42:46.574315    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 11:42:42 UTC 2024
	
	I0923 11:42:46.574315    7852 fix.go:236] clock set: Mon Sep 23 11:42:42 UTC 2024
	 (err=<nil>)
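The delta of roughly 4.4 s logged above is the difference between the guest's `date +%s.%N` reading and the host-side timestamp, after which the guest clock is overwritten with `sudo date -s @<epoch>`. A sketch of the same comparison using two fixed epoch values (the host reading here is a made-up example; no clock is actually set):

```shell
#!/bin/sh
# Compare a "guest" and "host" epoch reading the way fix.go logs the delta.
guest=1727091762   # from: date +%s.%N on the VM, truncated to seconds
host=1727091758    # hypothetical host-side reading for illustration

delta=$((guest - host))
echo "delta=${delta}s"

# minikube repairs the skew by pushing the host epoch into the guest:
#   sudo date -s @${host}
# (not executed in this sketch)
```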
	I0923 11:42:46.574315    7852 start.go:83] releasing machines lock for "functional-877700", held for 52.6091639s
	I0923 11:42:46.574614    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:50.628836    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:50.628836    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:50.631851    7852 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 11:42:50.631923    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:50.638893    7852 ssh_runner.go:195] Run: cat /version.json
	I0923 11:42:50.638893    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:52.529440    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:52.529440    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:52.529534    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:52.530129    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:52.530129    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:52.530309    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:54.892899    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:54.892899    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:54.893451    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:54.922489    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:54.922489    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:54.923227    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:54.987522    7852 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3552765s)
	W0923 11:42:54.987522    7852 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 11:42:55.011761    7852 ssh_runner.go:235] Completed: cat /version.json: (4.372573s)
	I0923 11:42:55.021068    7852 ssh_runner.go:195] Run: systemctl --version
	I0923 11:42:55.046977    7852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:42:55.055881    7852 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:42:55.064287    7852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0923 11:42:55.080316    7852 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 11:42:55.080316    7852 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
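The `curl.exe: command not found` failure above is the Windows binary name being run over SSH inside the Linux guest, where the tool is installed (if at all) as plain `curl`. A hedged sketch of a binary-name probe that would sidestep this, resolving whichever name exists before attempting the registry check:

```shell
#!/bin/sh
# Resolve whichever curl binary exists on this host: curl.exe only resolves
# on Windows, so inside a Linux VM the plain name must be used instead.
CURL=""
command -v curl.exe >/dev/null 2>&1 && CURL=curl.exe
[ -z "$CURL" ] && command -v curl >/dev/null 2>&1 && CURL=curl

echo "resolved curl binary: ${CURL:-none}"
# The actual probe from the log, not run in this sketch:
#   "$CURL" -sS -m 2 https://registry.k8s.io/
```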
	I0923 11:42:55.081946    7852 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:42:55.081946    7852 start.go:495] detecting cgroup driver to use...
	I0923 11:42:55.082192    7852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:42:55.135008    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:42:55.168837    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:42:55.187565    7852 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:42:55.200418    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:42:55.232012    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:42:55.258853    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:42:55.292031    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:42:55.323589    7852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:42:55.352615    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:42:55.382917    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:42:55.411755    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:42:55.438233    7852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:42:55.467842    7852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:42:55.492085    7852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:42:55.741316    7852 ssh_runner.go:195] Run: sudo systemctl restart containerd
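The run of `sed` commands above rewrites /etc/containerd/config.toml in place to select the cgroupfs driver and the pause:3.10 sandbox image. A condensed sketch of the two key substitutions applied to a scratch copy (the sample TOML content is assumed for illustration, not read from the VM):

```shell
#!/bin/sh
# Apply the cgroup-driver and sandbox-image rewrites from the log to a
# scratch config.toml instead of the real /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
sandbox_image = "registry.k8s.io/pause:3.9"
EOF

# Same substitutions minikube runs, minus sudo; \1 preserves indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"

grep SystemdCgroup "$cfg"
grep sandbox_image "$cfg"
```

Editing keys with anchored `sed` rather than appending lines keeps the rewrite idempotent across repeated provisioning runs.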
	I0923 11:42:55.772164    7852 start.go:495] detecting cgroup driver to use...
	I0923 11:42:55.778408    7852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:42:55.809605    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:42:55.842637    7852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:42:55.892970    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:42:55.924340    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:42:55.945420    7852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:42:55.988098    7852 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:42:56.004278    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:42:56.020838    7852 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:42:56.062007    7852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:42:56.309274    7852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:42:56.534069    7852 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:42:56.534348    7852 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:42:56.579775    7852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:42:56.828868    7852 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:44:08.114305    7852 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.2806256s)
	I0923 11:44:08.123742    7852 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0923 11:44:08.204240    7852 out.go:201] 
	W0923 11:44:08.208902    7852 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 23 11:36:16 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.363363894Z" level=info msg="Starting up"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.364381436Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.366062085Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.396070486Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421333599Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421440830Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421570288Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421587008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421667907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421679921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421834109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421929426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421947548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421957860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422282556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422610556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425477453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425563258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425695819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425774515Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425864325Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.426020415Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453345243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453442561Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453474801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453505038Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453531670Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453748134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454565932Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454894032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455202408Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455361702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455394342Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455442301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455467531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455493062Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455526603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455676686Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455719839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455745671Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455780914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456112818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456146960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456171390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456195719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456219749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456243578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456268308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456292437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456320171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456342999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456365226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456389456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456422696Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456459942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456484772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456599912Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456726166Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456763512Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456785339Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456808367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456828992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456851820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456870242Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457499810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457780653Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458271151Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458406216Z" level=info msg="containerd successfully booted in 0.063489s"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.438240515Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.473584775Z" level=info msg="Loading containers: start."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.634831782Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.851751895Z" level=info msg="Loading containers: done."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874123922Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874156661Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874177084Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874278903Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.968950643Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:17 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.969332588Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:44 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.554614697Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556291346Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556587407Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556810554Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556852062Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:45 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:45 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:45 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.606504166Z" level=info msg="Starting up"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.607566487Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.608690520Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1083
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.636170230Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659914064Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659955972Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659987979Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660000482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660028287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660040390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660182519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660274439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660290442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660300844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660323649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660431771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663679446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663727356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663877987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663961805Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663986710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664002713Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664120738Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664205755Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664221759Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664234661Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664246764Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664292974Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664521321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664671252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664703859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664718762Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664734365Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664746668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664757570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664774174Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664787276Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664799379Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664809981Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664820583Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664838487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664852090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664866093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664877295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664892798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664905901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664916803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664928006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664943709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664956511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664969114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664979916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664990619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665012623Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665031027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665043630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665056732Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665171356Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665201862Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665212665Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665224367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665234269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665245171Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665254373Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665604346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665818991Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665891906Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665919111Z" level=info msg="containerd successfully booted in 0.030553s"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.653176350Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.675801552Z" level=info msg="Loading containers: start."
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.797816505Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.918274234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.007319036Z" level=info msg="Loading containers: done."
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028686376Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028806601Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064119439Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:47 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064879197Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:54 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.935065116Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937521126Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937778380Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937936813Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937979322Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:55 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:55 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:55 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.988353475Z" level=info msg="Starting up"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.989122935Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.990176454Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1438
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.017499432Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043088049Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043124956Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043161464Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043189570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043214075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043226777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043374408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043389611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043405915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043416317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043437321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043535541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048684911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048772030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048893355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048907058Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048925862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048940465Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049080094Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049124003Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049137105Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049149608Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049163411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049199118Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049372354Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049445570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049459672Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049470875Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049482677Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049493680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049503882Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049515184Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049527787Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049538989Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049549591Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049559293Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049577897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049589499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049605003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049621306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049668716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049680618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049770737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049783840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049795442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049809645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049820347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049830650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049840952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049854054Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049872358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049882760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049892962Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049957876Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049973979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049984782Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049996884Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050008086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050020589Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050030991Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050284044Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050364160Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050404669Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050421872Z" level=info msg="containerd successfully booted in 0.033699s"
	Sep 23 11:36:57 functional-877700 dockerd[1431]: time="2024-09-23T11:36:57.056326286Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.408774280Z" level=info msg="Loading containers: start."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.555047973Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.673736035Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.765116623Z" level=info msg="Loading containers: done."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790598218Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790686536Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.830332574Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:37:00 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.832121546Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.354009805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358760766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358775765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358863661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362188493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362427881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362723466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.363713216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.413758696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414484559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414540656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414737947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452495445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452537743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452547142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452655437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745032012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745369195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745520387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745802373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789231786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789328681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789386278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789667064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.852945577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853277660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853394154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.854419803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858509897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858696287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858725086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858836580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113489212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113703733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113877050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.119697810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.250500799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251616406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251773022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.252013345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304389586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304456992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304473794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304694515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625584633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625869859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625919364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.626072078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028719988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028812805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028850011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028993837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.072947376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073257230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073388453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073845734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822307240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822465368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822691908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822937552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.096995123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097134647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097148750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097582227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.272189447Z" level=info msg="ignoring event" container=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273020895Z" level=info msg="shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273141316Z" level=warning msg="cleaning up after shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273154519Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.446778855Z" level=info msg="ignoring event" container=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447046502Z" level=info msg="shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447173525Z" level=warning msg="cleaning up after shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447185327Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.683312452Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.829405193Z" level=info msg="ignoring event" container=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829931000Z" level=info msg="shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829990913Z" level=warning msg="cleaning up after shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.830002115Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.854549329Z" level=info msg="ignoring event" container=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854556130Z" level=info msg="shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854794479Z" level=warning msg="cleaning up after shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854869594Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.867096392Z" level=info msg="ignoring event" container=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.866991770Z" level=info msg="shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868159609Z" level=warning msg="cleaning up after shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868225122Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.873572915Z" level=info msg="ignoring event" container=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874484301Z" level=info msg="shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874598524Z" level=warning msg="cleaning up after shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874637232Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887059470Z" level=info msg="ignoring event" container=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887368033Z" level=info msg="shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887425744Z" level=warning msg="cleaning up after shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887435246Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887671094Z" level=info msg="shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887819125Z" level=info msg="ignoring event" container=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887879937Z" level=warning msg="cleaning up after shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.888018065Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907455436Z" level=info msg="ignoring event" container=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907495744Z" level=info msg="ignoring event" container=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925651252Z" level=info msg="shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925962016Z" level=warning msg="cleaning up after shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925973318Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936426853Z" level=info msg="shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936579385Z" level=warning msg="cleaning up after shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936607990Z" level=info msg="ignoring event" container=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936635296Z" level=info msg="ignoring event" container=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936653800Z" level=info msg="ignoring event" container=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936712612Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941574305Z" level=info msg="shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941655421Z" level=warning msg="cleaning up after shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941664923Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949111144Z" level=info msg="shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949309285Z" level=warning msg="cleaning up after shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949319987Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958358133Z" level=info msg="shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958473657Z" level=warning msg="cleaning up after shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958521967Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.968525410Z" level=info msg="shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.968824371Z" level=info msg="ignoring event" container=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969298368Z" level=warning msg="cleaning up after shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969415792Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1431]: time="2024-09-23T11:39:15.769798933Z" level=info msg="ignoring event" container=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771375655Z" level=info msg="shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771441369Z" level=warning msg="cleaning up after shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771451871Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.813293151Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869801842Z" level=info msg="shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869856448Z" level=warning msg="cleaning up after shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869865649Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.870564216Z" level=info msg="ignoring event" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932188905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932979382Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933172501Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933202803Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:39:21 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Consumed 4.872s CPU time.
	Sep 23 11:39:21 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984115697Z" level=info msg="Starting up"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984939583Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.986050598Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4218
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.016036706Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039700313Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039806824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039839228Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039850929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039873232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039883433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040054452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040204468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040224670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040235171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040258474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040353184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045464247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045565559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045977304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046077715Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046108618Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046167125Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046524364Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046646378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046666980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046682082Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046696783Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046754290Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047082126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047279447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047378458Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047396660Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047415362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047427264Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047440265Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047453267Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047467568Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047478569Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047489270Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047499572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047517474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047531675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047552577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047565479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047576180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047587681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047597882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047608984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047620485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047634286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047644388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047654189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047665990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047678791Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047697893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047708595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047722096Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047810506Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047829408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047840109Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047850910Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047860211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047874313Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047885714Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048256055Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048443976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048557088Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048575990Z" level=info msg="containerd successfully booted in 0.034003s"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.033830503Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.063758076Z" level=info msg="Loading containers: start."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.280002266Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.409457886Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.509213361Z" level=info msg="Loading containers: done."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543477036Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543685761Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.575120708Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:39:23 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.577119640Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115570481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115625188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115637189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115709298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227172822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227232230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227245531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227362246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.339817796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342223901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342254105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342468032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497814816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497986738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498064248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498360885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.525834667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526017090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526076497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.536907970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750278307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750411323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750425225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750516437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900046084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900773476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902226560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902856440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.939828625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940224975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940315387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.942516766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.185965232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.189330369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193023849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193305085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261234511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261411734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261512547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261673268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428281214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428580653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428743174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.432109011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694521004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694741033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694790839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694956561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.566925297Z" level=info msg="ignoring event" container=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568301726Z" level=info msg="shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568863020Z" level=warning msg="cleaning up after shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.569351401Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573603109Z" level=info msg="shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573696525Z" level=warning msg="cleaning up after shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573707227Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.573984273Z" level=info msg="ignoring event" container=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.578903092Z" level=info msg="ignoring event" container=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.579643816Z" level=info msg="shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581215878Z" level=warning msg="cleaning up after shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.581346099Z" level=info msg="ignoring event" container=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581428513Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581951000Z" level=info msg="shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581995407Z" level=warning msg="cleaning up after shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.582004009Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.597570502Z" level=info msg="ignoring event" container=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610389638Z" level=info msg="shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610530461Z" level=warning msg="cleaning up after shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610645881Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.621577202Z" level=info msg="ignoring event" container=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622367133Z" level=info msg="shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622475751Z" level=warning msg="cleaning up after shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622546463Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.635696754Z" level=info msg="shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636175234Z" level=warning msg="cleaning up after shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636360765Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.651928858Z" level=info msg="ignoring event" container=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.652691485Z" level=info msg="shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.660686917Z" level=info msg="ignoring event" container=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.663213638Z" level=info msg="ignoring event" container=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.669991668Z" level=warning msg="cleaning up after shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.670085183Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.660509388Z" level=info msg="shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.679562562Z" level=warning msg="cleaning up after shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.681033207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184736627Z" level=info msg="shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184785835Z" level=warning msg="cleaning up after shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184795637Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.185745696Z" level=info msg="ignoring event" container=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.278984682Z" level=info msg="shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279055694Z" level=warning msg="cleaning up after shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279067096Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.279977748Z" level=info msg="ignoring event" container=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410094699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410458659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410613285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410941440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568446569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568537384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568556887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568657804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.670526533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676344305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676364709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676451423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710366393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710454707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710469310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710551424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732630814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732737932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732870054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.733058486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997622111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997807842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997867052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997990773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094861386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094998109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095017513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095210746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.331029320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333698174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333727979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.334197760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:35 functional-877700 dockerd[4212]: time="2024-09-23T11:39:35.396427521Z" level=error msg="collecting stats for container /k8s_coredns_coredns-7c65d6cfc9-68rgs_kube-system_207034a8-50d8-43ec-b01c-2e0a29efdc66_1: invalid id: "
	Sep 23 11:39:35 functional-877700 dockerd[4212]: 2024/09/23 11:39:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Sep 23 11:39:37 functional-877700 dockerd[4212]: time="2024-09-23T11:39:37.337336680Z" level=info msg="ignoring event" container=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338143319Z" level=info msg="shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338506781Z" level=warning msg="cleaning up after shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338593696Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.137955358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138247609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138416139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138621075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198271806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198440435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198515748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.199563031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.222966524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223195264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223281379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223640342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981692372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981830996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981859501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981957318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.084899403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085158149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085423195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.087385540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.514583300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.522875456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523082393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523369543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.063783746Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:42:57 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.269387669Z" level=info msg="ignoring event" container=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272857372Z" level=info msg="shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272914983Z" level=warning msg="cleaning up after shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272989998Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273759654Z" level=info msg="shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273807364Z" level=warning msg="cleaning up after shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273816366Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274076818Z" level=info msg="ignoring event" container=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274334971Z" level=info msg="ignoring event" container=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281295780Z" level=info msg="shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281347090Z" level=warning msg="cleaning up after shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281429207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.281785979Z" level=info msg="ignoring event" container=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288761591Z" level=info msg="shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288857311Z" level=warning msg="cleaning up after shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288899919Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.289709683Z" level=info msg="ignoring event" container=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.289885119Z" level=info msg="shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290052853Z" level=warning msg="cleaning up after shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290077758Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.301171104Z" level=info msg="shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.301356441Z" level=info msg="ignoring event" container=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.303740624Z" level=info msg="ignoring event" container=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.304055988Z" level=info msg="ignoring event" container=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305020783Z" level=warning msg="cleaning up after shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305125504Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314046710Z" level=info msg="ignoring event" container=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314082117Z" level=info msg="ignoring event" container=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.302995873Z" level=info msg="shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314416185Z" level=warning msg="cleaning up after shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314428687Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317021012Z" level=info msg="shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317067522Z" level=warning msg="cleaning up after shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317077724Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.303304235Z" level=info msg="shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323122848Z" level=warning msg="cleaning up after shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323280880Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331804205Z" level=info msg="shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331915728Z" level=warning msg="cleaning up after shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331964638Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.351922178Z" level=info msg="ignoring event" container=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.352121318Z" level=info msg="ignoring event" container=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.352878171Z" level=info msg="shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353018500Z" level=warning msg="cleaning up after shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353272851Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353087514Z" level=info msg="shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366800790Z" level=warning msg="cleaning up after shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366924715Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4212]: time="2024-09-23T11:43:02.178902577Z" level=info msg="ignoring event" container=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.180564113Z" level=info msg="shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.181657335Z" level=warning msg="cleaning up after shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.182298464Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.148744009Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.187274688Z" level=info msg="ignoring event" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187333094Z" level=info msg="shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187370797Z" level=warning msg="cleaning up after shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187380798Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.255786085Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256042511Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256207728Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256269334Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:43:08 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Consumed 9.026s CPU time.
	Sep 23 11:43:08 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:43:08 functional-877700 dockerd[8665]: time="2024-09-23T11:43:08.304403480Z" level=info msg="Starting up"
	Sep 23 11:44:08 functional-877700 dockerd[8665]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 11:44:08 functional-877700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0923 11:44:08.210247    7852 out.go:270] * 
	W0923 11:44:08.211375    7852 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 11:44:08.216471    7852 out.go:201] 
	
	
	==> Docker <==
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8'"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID '7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50'"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1'"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1'"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1'"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1'"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f'"
	Sep 23 11:46:08 functional-877700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 11:46:08 functional-877700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 11:46:08 functional-877700 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d'"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3'"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32'"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="error getting RW layer size for container ID '5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:46:08 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:46:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-23T11:46:10Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +7.212864] kauditd_printk_skb: 88 callbacks suppressed
	[Sep23 11:38] kauditd_printk_skb: 10 callbacks suppressed
	[Sep23 11:39] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.560403] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.266225] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.259657] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +5.242721] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.039221] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.187714] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.177919] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.247363] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.078027] systemd-fstab-generator[4815]: Ignoring "noauto" option for root device
	[  +0.552186] kauditd_printk_skb: 169 callbacks suppressed
	[  +8.107083] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.047969] systemd-fstab-generator[6111]: Ignoring "noauto" option for root device
	[  +0.111564] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.007075] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.332556] systemd-fstab-generator[6642]: Ignoring "noauto" option for root device
	[  +0.159781] kauditd_printk_skb: 3 callbacks suppressed
	[Sep23 11:42] systemd-fstab-generator[8204]: Ignoring "noauto" option for root device
	[  +0.163607] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.439546] systemd-fstab-generator[8239]: Ignoring "noauto" option for root device
	[  +0.232654] systemd-fstab-generator[8251]: Ignoring "noauto" option for root device
	[  +0.268342] systemd-fstab-generator[8265]: Ignoring "noauto" option for root device
	[Sep23 11:43] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:47:09 up 11 min,  0 users,  load average: 0.02, 0.16, 0.15
	Linux functional-877700 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 23 11:46:52 functional-877700 kubelet[6119]: E0923 11:46:52.318449    6119 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-877700.17f7dcd22c1212d6\": dial tcp 172.19.157.210:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-877700.17f7dcd22c1212d6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-877700,UID:d94a2590761a98c126cc01e55566a60c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.19.157.210:8441/readyz\": dial tcp 172.19.157.210:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-877700,},FirstTimestamp:2024-09-23 11:42:57.360499414 +0000 UTC m=+198.058688087,LastTimestamp:2024-09-23 11:42:58.360686397 +0000 UTC m=+199.058875070,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-877700,}"
	Sep 23 11:46:54 functional-877700 kubelet[6119]: E0923 11:46:54.557321    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 11:46:54 functional-877700 kubelet[6119]: E0923 11:46:54.676249    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m57.919112438s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Sep 23 11:46:59 functional-877700 kubelet[6119]: I0923 11:46:59.503329    6119 status_manager.go:851] "Failed to get status for pod" podUID="d94a2590761a98c126cc01e55566a60c" pod="kube-system/kube-apiserver-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:46:59 functional-877700 kubelet[6119]: I0923 11:46:59.504246    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:46:59 functional-877700 kubelet[6119]: E0923 11:46:59.676871    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m2.919724836s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Sep 23 11:47:01 functional-877700 kubelet[6119]: E0923 11:47:01.559970    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 11:47:02 functional-877700 kubelet[6119]: E0923 11:47:02.321301    6119 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-877700.17f7dcd22c1212d6\": dial tcp 172.19.157.210:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-877700.17f7dcd22c1212d6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-877700,UID:d94a2590761a98c126cc01e55566a60c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.19.157.210:8441/readyz\": dial tcp 172.19.157.210:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-877700,},FirstTimestamp:2024-09-23 11:42:57.360499414 +0000 UTC m=+198.058688087,LastTimestamp:2024-09-23 11:42:58.360686397 +0000 UTC m=+199.058875070,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-877700,}"
	Sep 23 11:47:04 functional-877700 kubelet[6119]: E0923 11:47:04.677175    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m7.920033969s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.562960    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934247    6119 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934296    6119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934363    6119 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934478    6119 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934546    6119 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934574    6119 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934591    6119 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: I0923 11:47:08.934603    6119 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934636    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934656    6119 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934676    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.934694    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.935495    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.935569    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 23 11:47:08 functional-877700 kubelet[6119]: E0923 11:47:08.935970    6119 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	E0923 11:45:08.319530    8152 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:45:08.353610    8152 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:45:08.386049    8152 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:45:08.414740    8152 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:45:08.442760    8152 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:46:08.522627    8152 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:46:08.547784    8152 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:46:08.575404    8152 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700: exit status 2 (10.4551021s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-877700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (330.55s)

TestFunctional/serial/ComponentHealth (120.62s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-877700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-877700 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (2.1859697s)

-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-877700 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700: exit status 2 (10.2573378s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs -n 25: (1m37.3025078s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:32 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:33 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:33 UTC | 23 Sep 24 11:34 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-191100 --log_dir                                                  | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:34 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-191100                                                         | nospam-191100     | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:34 UTC |
	| start   | -p functional-877700                                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:34 UTC | 23 Sep 24 11:38 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-877700                                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:38 UTC | 23 Sep 24 11:40 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache add                                              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | minikube-local-cache-test:functional-877700                              |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache delete                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | minikube-local-cache-test:functional-877700                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	| ssh     | functional-877700 ssh sudo                                               | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-877700                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC | 23 Sep 24 11:40 UTC |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh                                                    | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:40 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-877700 cache reload                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	| ssh     | functional-877700 ssh                                                    | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-877700 kubectl --                                             | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | --context functional-877700                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-877700                                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:41:49
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:41:49.209678    7852 out.go:345] Setting OutFile to fd 292 ...
	I0923 11:41:49.254977    7852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:41:49.254977    7852 out.go:358] Setting ErrFile to fd 284...
	I0923 11:41:49.254977    7852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:41:49.276647    7852 out.go:352] Setting JSON to false
	I0923 11:41:49.282204    7852 start.go:129] hostinfo: {"hostname":"minikube5","uptime":487685,"bootTime":1726604023,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:41:49.282285    7852 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:41:49.287181    7852 out.go:177] * [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:41:49.288949    7852 notify.go:220] Checking for updates...
	I0923 11:41:49.290898    7852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:41:49.293889    7852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:41:49.295578    7852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:41:49.302549    7852 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:41:49.308496    7852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:41:49.312968    7852 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:41:49.313645    7852 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:41:53.910812    7852 out.go:177] * Using the hyperv driver based on existing profile
	I0923 11:41:53.912248    7852 start.go:297] selected driver: hyperv
	I0923 11:41:53.912248    7852 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:41:53.913184    7852 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:41:53.952383    7852 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:41:53.952383    7852 cni.go:84] Creating CNI manager for ""
	I0923 11:41:53.952383    7852 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:41:53.952383    7852 start.go:340] cluster config:
	{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:41:53.952999    7852 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:41:53.957227    7852 out.go:177] * Starting "functional-877700" primary control-plane node in "functional-877700" cluster
	I0923 11:41:53.959364    7852 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:41:53.959364    7852 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:41:53.959364    7852 cache.go:56] Caching tarball of preloaded images
	I0923 11:41:53.960009    7852 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:41:53.960009    7852 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:41:53.960009    7852 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\config.json ...
	I0923 11:41:53.961599    7852 start.go:360] acquireMachinesLock for functional-877700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:41:53.961599    7852 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-877700"
	I0923 11:41:53.961599    7852 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:41:53.961599    7852 fix.go:54] fixHost starting: 
	I0923 11:41:53.962631    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:41:56.287566    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:41:56.287566    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:41:56.287566    7852 fix.go:112] recreateIfNeeded on functional-877700: state=Running err=<nil>
	W0923 11:41:56.287566    7852 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:41:56.291781    7852 out.go:177] * Updating the running hyperv "functional-877700" VM ...
	I0923 11:41:56.293697    7852 machine.go:93] provisionDockerMachine start ...
	I0923 11:41:56.293697    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:41:58.141209    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:41:58.141209    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:41:58.141860    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:00.334077    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:00.334077    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:00.340169    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:00.340820    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:00.340820    7852 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:42:00.476012    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:42:00.476012    7852 buildroot.go:166] provisioning hostname "functional-877700"
	I0923 11:42:00.476196    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:02.350414    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:02.350414    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:02.350489    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:04.528996    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:04.528996    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:04.532106    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:04.532663    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:04.532663    7852 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-877700 && echo "functional-877700" | sudo tee /etc/hostname
	I0923 11:42:04.695359    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:42:04.695462    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:06.512485    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:06.512485    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:06.512575    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:08.680076    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:08.680076    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:08.685385    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:08.685385    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:08.685385    7852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-877700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-877700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-877700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:42:08.818616    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:42:08.818779    7852 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 11:42:08.818779    7852 buildroot.go:174] setting up certificates
	I0923 11:42:08.818779    7852 provision.go:84] configureAuth start
	I0923 11:42:08.818921    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:12.871773    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:12.871773    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:12.872169    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:16.857888    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:16.857888    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:16.857888    7852 provision.go:143] copyHostCerts
	I0923 11:42:16.859128    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 11:42:16.859128    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 11:42:16.859459    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 11:42:16.860464    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 11:42:16.860464    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 11:42:16.860464    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 11:42:16.861061    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 11:42:16.861061    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 11:42:16.861668    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 11:42:16.862376    7852 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-877700 san=[127.0.0.1 172.19.157.210 functional-877700 localhost minikube]
	I0923 11:42:17.030195    7852 provision.go:177] copyRemoteCerts
	I0923 11:42:17.038185    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:42:17.038185    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:18.866277    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:18.866277    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:18.866359    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:21.044973    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:21.044973    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:21.045318    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:21.146797    7852 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1083353s)
	I0923 11:42:21.147358    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 11:42:21.190478    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:42:21.235758    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:42:21.278554    7852 provision.go:87] duration metric: took 12.4587259s to configureAuth
	I0923 11:42:21.278554    7852 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:42:21.279490    7852 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:42:21.279632    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:25.288342    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:25.288342    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:25.293586    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:25.294322    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:25.294322    7852 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:42:25.433856    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 11:42:25.433977    7852 buildroot.go:70] root file system type: tmpfs
	I0923 11:42:25.433977    7852 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:42:25.434214    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:27.284575    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:27.284575    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:27.284627    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:29.509670    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:29.509670    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:29.514280    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:29.514512    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:29.514512    7852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:42:29.686040    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 11:42:29.686110    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:31.546974    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:31.546974    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:31.547611    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:33.788582    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:33.788582    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:33.791693    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:33.792102    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:33.792102    7852 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:42:33.934969    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:42:33.934969    7852 machine.go:96] duration metric: took 37.6387313s to provisionDockerMachine
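The `diff -u … || { mv …; systemctl … }` one-liner above is an idempotent update: the new unit file replaces the old one, and the daemon is reloaded and restarted, only when the two actually differ. A minimal local sketch of that pattern (hypothetical file contents, plain `mv` instead of `sudo`/`systemctl`):

```shell
# Sketch of the diff-guarded replace pattern from the log above.
# Paths and contents are hypothetical; no privileged commands are run.
set -eu
dir=$(mktemp -d)
printf '%s\n' 'ExecStart=/usr/bin/dockerd' > "$dir/docker.service"
printf '%s\n' 'ExecStart=/usr/bin/dockerd --tlsverify' > "$dir/docker.service.new"

# diff exits non-zero when the files differ; only then do we swap the file in.
# An unchanged rewrite is a no-op and would skip the expensive service restart.
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null; then
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "replaced"
else
  rm -f "$dir/docker.service.new"
  echo "unchanged"
fi
```

Because the replacement is gated on `diff`, re-running provisioning against an already-configured VM leaves `docker.service` untouched and Docker keeps running.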
	I0923 11:42:33.934969    7852 start.go:293] postStartSetup for "functional-877700" (driver="hyperv")
	I0923 11:42:33.934969    7852 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:42:33.944284    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:42:33.944798    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:38.034820    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:38.034820    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:38.034934    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:38.139038    7852 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.194305s)
	I0923 11:42:38.150165    7852 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:42:38.158855    7852 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:42:38.158919    7852 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 11:42:38.159371    7852 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 11:42:38.160573    7852 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 11:42:38.161924    7852 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts -> hosts in /etc/test/nested/copy/3844
	I0923 11:42:38.171682    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3844
	I0923 11:42:38.188615    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 11:42:38.227276    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts --> /etc/test/nested/copy/3844/hosts (40 bytes)
	I0923 11:42:38.273146    7852 start.go:296] duration metric: took 4.337884s for postStartSetup
	I0923 11:42:38.273276    7852 fix.go:56] duration metric: took 44.3086851s for fixHost
	I0923 11:42:38.273367    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:40.096277    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:40.096277    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:40.097281    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:42.292209    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:42.292209    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:42.295797    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:42.295797    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:42.295797    7852 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:42:42.422879    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727091762.660852161
	
	I0923 11:42:42.422879    7852 fix.go:216] guest clock: 1727091762.660852161
	I0923 11:42:42.422879    7852 fix.go:229] Guest: 2024-09-23 11:42:42.660852161 +0000 UTC Remote: 2024-09-23 11:42:38.273276 +0000 UTC m=+49.132292601 (delta=4.387576161s)
	I0923 11:42:42.423001    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:44.241611    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:44.241611    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:44.241701    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:46.426874    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:46.426874    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:46.431658    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:46.432084    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:46.432084    7852 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727091762
	I0923 11:42:46.574315    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 11:42:42 UTC 2024
	
	I0923 11:42:46.574315    7852 fix.go:236] clock set: Mon Sep 23 11:42:42 UTC 2024
	 (err=<nil>)
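The clock-fix exchange above reads the guest clock with `date +%s.%N`, computes the guest/host delta (4.387576161s here), and resets the guest with `sudo date -s @<epoch>`. A sketch of the delta arithmetic with hypothetical epoch values (pure computation, no clock is modified):

```shell
# Sketch of the clock-skew check from the log above.
# Both readings are hypothetical integers; real code keeps nanoseconds.
set -eu
guest=1727091762   # epoch seconds the VM reported via `date +%s`
host=1727091758    # hypothetical host-side reading taken at the same moment
delta=$((guest - host))
echo "delta=${delta}s"
# minikube's fix step then runs `sudo date -s @<epoch>` inside the VM;
# here we only print the command it would issue.
echo "would run: sudo date -s @${guest}"
```

`date -s @N` sets the clock from a Unix epoch, which is why the log's follow-up command truncates the fractional seconds.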
	I0923 11:42:46.574315    7852 start.go:83] releasing machines lock for "functional-877700", held for 52.6091639s
	I0923 11:42:46.574614    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:50.628836    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:50.628836    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:50.631851    7852 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 11:42:50.631923    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:50.638893    7852 ssh_runner.go:195] Run: cat /version.json
	I0923 11:42:50.638893    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:52.529440    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:52.529440    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:52.529534    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:52.530129    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:52.530129    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:52.530309    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:54.892899    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:54.892899    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:54.893451    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:54.922489    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:54.922489    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:54.923227    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:54.987522    7852 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3552765s)
	W0923 11:42:54.987522    7852 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 11:42:55.011761    7852 ssh_runner.go:235] Completed: cat /version.json: (4.372573s)
	I0923 11:42:55.021068    7852 ssh_runner.go:195] Run: systemctl --version
	I0923 11:42:55.046977    7852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:42:55.055881    7852 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:42:55.064287    7852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0923 11:42:55.080316    7852 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 11:42:55.080316    7852 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 11:42:55.081946    7852 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:42:55.081946    7852 start.go:495] detecting cgroup driver to use...
	I0923 11:42:55.082192    7852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:42:55.135008    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:42:55.168837    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:42:55.187565    7852 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:42:55.200418    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:42:55.232012    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:42:55.258853    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:42:55.292031    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:42:55.323589    7852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:42:55.352615    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:42:55.382917    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:42:55.411755    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
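The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place, preserving each line's indentation by capturing the leading whitespace. A minimal local sketch of that edit against a temp file (hypothetical one-line config, same sed pattern as the `SystemdCgroup` rewrite in the log):

```shell
# Sketch of the indentation-preserving sed rewrite used above.
# The config content is hypothetical; only the sed pattern mirrors the log.
set -eu
cfg=$(mktemp)
printf '%s\n' '  SystemdCgroup = true' > "$cfg"
# Capture the leading spaces in \1 so the rewritten line keeps its TOML nesting.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

Switching `SystemdCgroup` to `false` matches the "configuring containerd to use \"cgroupfs\" as cgroup driver" step logged just before these commands.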
	I0923 11:42:55.438233    7852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:42:55.467842    7852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:42:55.492085    7852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:42:55.741316    7852 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:42:55.772164    7852 start.go:495] detecting cgroup driver to use...
	I0923 11:42:55.778408    7852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:42:55.809605    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:42:55.842637    7852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:42:55.892970    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:42:55.924340    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:42:55.945420    7852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:42:55.988098    7852 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:42:56.004278    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:42:56.020838    7852 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:42:56.062007    7852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:42:56.309274    7852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:42:56.534069    7852 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:42:56.534348    7852 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:42:56.579775    7852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:42:56.828868    7852 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:44:08.114305    7852 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.2806256s)
	I0923 11:44:08.123742    7852 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0923 11:44:08.204240    7852 out.go:201] 
	W0923 11:44:08.208902    7852 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 23 11:36:16 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.363363894Z" level=info msg="Starting up"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.364381436Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.366062085Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.396070486Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421333599Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421440830Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421570288Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421587008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421667907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421679921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421834109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421929426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421947548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421957860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422282556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422610556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425477453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425563258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425695819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425774515Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425864325Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.426020415Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453345243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453442561Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453474801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453505038Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453531670Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453748134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454565932Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454894032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455202408Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455361702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455394342Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455442301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455467531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455493062Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455526603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455676686Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455719839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455745671Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455780914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456112818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456146960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456171390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456195719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456219749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456243578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456268308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456292437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456320171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456342999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456365226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456389456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456422696Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456459942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456484772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456599912Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456726166Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456763512Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456785339Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456808367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456828992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456851820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456870242Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457499810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457780653Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458271151Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458406216Z" level=info msg="containerd successfully booted in 0.063489s"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.438240515Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.473584775Z" level=info msg="Loading containers: start."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.634831782Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.851751895Z" level=info msg="Loading containers: done."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874123922Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874156661Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874177084Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874278903Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.968950643Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:17 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.969332588Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:44 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.554614697Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556291346Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556587407Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556810554Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556852062Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:45 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:45 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:45 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.606504166Z" level=info msg="Starting up"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.607566487Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.608690520Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1083
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.636170230Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659914064Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659955972Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659987979Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660000482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660028287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660040390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660182519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660274439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660290442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660300844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660323649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660431771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663679446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663727356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663877987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663961805Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663986710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664002713Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664120738Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664205755Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664221759Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664234661Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664246764Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664292974Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664521321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664671252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664703859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664718762Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664734365Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664746668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664757570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664774174Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664787276Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664799379Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664809981Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664820583Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664838487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664852090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664866093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664877295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664892798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664905901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664916803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664928006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664943709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664956511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664969114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664979916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664990619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665012623Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665031027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665043630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665056732Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665171356Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665201862Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665212665Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665224367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665234269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665245171Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665254373Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665604346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665818991Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665891906Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665919111Z" level=info msg="containerd successfully booted in 0.030553s"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.653176350Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.675801552Z" level=info msg="Loading containers: start."
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.797816505Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.918274234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.007319036Z" level=info msg="Loading containers: done."
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028686376Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028806601Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064119439Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:47 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064879197Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:54 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.935065116Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937521126Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937778380Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937936813Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937979322Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:55 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:55 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:55 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.988353475Z" level=info msg="Starting up"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.989122935Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.990176454Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1438
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.017499432Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043088049Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043124956Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043161464Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043189570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043214075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043226777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043374408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043389611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043405915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043416317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043437321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043535541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048684911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048772030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048893355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048907058Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048925862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048940465Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049080094Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049124003Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049137105Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049149608Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049163411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049199118Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049372354Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049445570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049459672Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049470875Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049482677Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049493680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049503882Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049515184Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049527787Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049538989Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049549591Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049559293Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049577897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049589499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049605003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049621306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049668716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049680618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049770737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049783840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049795442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049809645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049820347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049830650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049840952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049854054Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049872358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049882760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049892962Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049957876Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049973979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049984782Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049996884Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050008086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050020589Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050030991Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050284044Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050364160Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050404669Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050421872Z" level=info msg="containerd successfully booted in 0.033699s"
	Sep 23 11:36:57 functional-877700 dockerd[1431]: time="2024-09-23T11:36:57.056326286Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.408774280Z" level=info msg="Loading containers: start."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.555047973Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.673736035Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.765116623Z" level=info msg="Loading containers: done."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790598218Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790686536Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.830332574Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:37:00 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.832121546Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.354009805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358760766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358775765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358863661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362188493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362427881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362723466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.363713216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.413758696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414484559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414540656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414737947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452495445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452537743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452547142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452655437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745032012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745369195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745520387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745802373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789231786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789328681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789386278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789667064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.852945577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853277660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853394154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.854419803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858509897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858696287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858725086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858836580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113489212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113703733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113877050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.119697810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.250500799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251616406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251773022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.252013345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304389586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304456992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304473794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304694515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625584633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625869859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625919364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.626072078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028719988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028812805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028850011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028993837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.072947376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073257230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073388453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073845734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822307240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822465368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822691908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822937552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.096995123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097134647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097148750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097582227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.272189447Z" level=info msg="ignoring event" container=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273020895Z" level=info msg="shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273141316Z" level=warning msg="cleaning up after shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273154519Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.446778855Z" level=info msg="ignoring event" container=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447046502Z" level=info msg="shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447173525Z" level=warning msg="cleaning up after shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447185327Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.683312452Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.829405193Z" level=info msg="ignoring event" container=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829931000Z" level=info msg="shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829990913Z" level=warning msg="cleaning up after shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.830002115Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.854549329Z" level=info msg="ignoring event" container=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854556130Z" level=info msg="shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854794479Z" level=warning msg="cleaning up after shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854869594Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.867096392Z" level=info msg="ignoring event" container=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.866991770Z" level=info msg="shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868159609Z" level=warning msg="cleaning up after shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868225122Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.873572915Z" level=info msg="ignoring event" container=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874484301Z" level=info msg="shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874598524Z" level=warning msg="cleaning up after shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874637232Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887059470Z" level=info msg="ignoring event" container=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887368033Z" level=info msg="shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887425744Z" level=warning msg="cleaning up after shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887435246Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887671094Z" level=info msg="shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887819125Z" level=info msg="ignoring event" container=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887879937Z" level=warning msg="cleaning up after shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.888018065Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907455436Z" level=info msg="ignoring event" container=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907495744Z" level=info msg="ignoring event" container=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925651252Z" level=info msg="shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925962016Z" level=warning msg="cleaning up after shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925973318Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936426853Z" level=info msg="shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936579385Z" level=warning msg="cleaning up after shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936607990Z" level=info msg="ignoring event" container=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936635296Z" level=info msg="ignoring event" container=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936653800Z" level=info msg="ignoring event" container=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936712612Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941574305Z" level=info msg="shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941655421Z" level=warning msg="cleaning up after shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941664923Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949111144Z" level=info msg="shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949309285Z" level=warning msg="cleaning up after shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949319987Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958358133Z" level=info msg="shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958473657Z" level=warning msg="cleaning up after shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958521967Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.968525410Z" level=info msg="shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.968824371Z" level=info msg="ignoring event" container=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969298368Z" level=warning msg="cleaning up after shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969415792Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1431]: time="2024-09-23T11:39:15.769798933Z" level=info msg="ignoring event" container=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771375655Z" level=info msg="shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771441369Z" level=warning msg="cleaning up after shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771451871Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.813293151Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869801842Z" level=info msg="shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869856448Z" level=warning msg="cleaning up after shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869865649Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.870564216Z" level=info msg="ignoring event" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932188905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932979382Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933172501Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933202803Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:39:21 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Consumed 4.872s CPU time.
	Sep 23 11:39:21 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984115697Z" level=info msg="Starting up"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984939583Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.986050598Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4218
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.016036706Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039700313Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039806824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039839228Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039850929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039873232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039883433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040054452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040204468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040224670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040235171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040258474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040353184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045464247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045565559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045977304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046077715Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046108618Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046167125Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046524364Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046646378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046666980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046682082Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046696783Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046754290Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047082126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047279447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047378458Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047396660Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047415362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047427264Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047440265Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047453267Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047467568Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047478569Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047489270Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047499572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047517474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047531675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047552577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047565479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047576180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047587681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047597882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047608984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047620485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047634286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047644388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047654189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047665990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047678791Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047697893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047708595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047722096Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047810506Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047829408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047840109Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047850910Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047860211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047874313Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047885714Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048256055Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048443976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048557088Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048575990Z" level=info msg="containerd successfully booted in 0.034003s"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.033830503Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.063758076Z" level=info msg="Loading containers: start."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.280002266Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.409457886Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.509213361Z" level=info msg="Loading containers: done."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543477036Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543685761Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.575120708Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:39:23 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.577119640Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115570481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115625188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115637189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115709298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227172822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227232230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227245531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227362246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.339817796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342223901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342254105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342468032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497814816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497986738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498064248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498360885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.525834667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526017090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526076497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.536907970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750278307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750411323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750425225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750516437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900046084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900773476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902226560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902856440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.939828625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940224975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940315387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.942516766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.185965232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.189330369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193023849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193305085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261234511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261411734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261512547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261673268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428281214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428580653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428743174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.432109011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694521004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694741033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694790839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694956561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.566925297Z" level=info msg="ignoring event" container=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568301726Z" level=info msg="shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568863020Z" level=warning msg="cleaning up after shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.569351401Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573603109Z" level=info msg="shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573696525Z" level=warning msg="cleaning up after shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573707227Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.573984273Z" level=info msg="ignoring event" container=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.578903092Z" level=info msg="ignoring event" container=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.579643816Z" level=info msg="shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581215878Z" level=warning msg="cleaning up after shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.581346099Z" level=info msg="ignoring event" container=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581428513Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581951000Z" level=info msg="shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581995407Z" level=warning msg="cleaning up after shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.582004009Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.597570502Z" level=info msg="ignoring event" container=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610389638Z" level=info msg="shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610530461Z" level=warning msg="cleaning up after shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610645881Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.621577202Z" level=info msg="ignoring event" container=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622367133Z" level=info msg="shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622475751Z" level=warning msg="cleaning up after shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622546463Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.635696754Z" level=info msg="shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636175234Z" level=warning msg="cleaning up after shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636360765Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.651928858Z" level=info msg="ignoring event" container=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.652691485Z" level=info msg="shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.660686917Z" level=info msg="ignoring event" container=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.663213638Z" level=info msg="ignoring event" container=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.669991668Z" level=warning msg="cleaning up after shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.670085183Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.660509388Z" level=info msg="shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.679562562Z" level=warning msg="cleaning up after shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.681033207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184736627Z" level=info msg="shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184785835Z" level=warning msg="cleaning up after shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184795637Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.185745696Z" level=info msg="ignoring event" container=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.278984682Z" level=info msg="shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279055694Z" level=warning msg="cleaning up after shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279067096Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.279977748Z" level=info msg="ignoring event" container=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410094699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410458659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410613285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410941440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568446569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568537384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568556887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568657804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.670526533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676344305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676364709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676451423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710366393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710454707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710469310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710551424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732630814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732737932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732870054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.733058486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997622111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997807842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997867052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997990773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094861386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094998109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095017513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095210746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.331029320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333698174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333727979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.334197760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:35 functional-877700 dockerd[4212]: time="2024-09-23T11:39:35.396427521Z" level=error msg="collecting stats for container /k8s_coredns_coredns-7c65d6cfc9-68rgs_kube-system_207034a8-50d8-43ec-b01c-2e0a29efdc66_1: invalid id: "
	Sep 23 11:39:35 functional-877700 dockerd[4212]: 2024/09/23 11:39:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Sep 23 11:39:37 functional-877700 dockerd[4212]: time="2024-09-23T11:39:37.337336680Z" level=info msg="ignoring event" container=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338143319Z" level=info msg="shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338506781Z" level=warning msg="cleaning up after shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338593696Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.137955358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138247609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138416139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138621075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198271806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198440435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198515748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.199563031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.222966524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223195264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223281379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223640342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981692372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981830996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981859501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981957318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.084899403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085158149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085423195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.087385540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.514583300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.522875456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523082393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523369543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.063783746Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:42:57 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.269387669Z" level=info msg="ignoring event" container=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272857372Z" level=info msg="shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272914983Z" level=warning msg="cleaning up after shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272989998Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273759654Z" level=info msg="shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273807364Z" level=warning msg="cleaning up after shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273816366Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274076818Z" level=info msg="ignoring event" container=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274334971Z" level=info msg="ignoring event" container=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281295780Z" level=info msg="shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281347090Z" level=warning msg="cleaning up after shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281429207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.281785979Z" level=info msg="ignoring event" container=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288761591Z" level=info msg="shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288857311Z" level=warning msg="cleaning up after shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288899919Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.289709683Z" level=info msg="ignoring event" container=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.289885119Z" level=info msg="shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290052853Z" level=warning msg="cleaning up after shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290077758Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.301171104Z" level=info msg="shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.301356441Z" level=info msg="ignoring event" container=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.303740624Z" level=info msg="ignoring event" container=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.304055988Z" level=info msg="ignoring event" container=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305020783Z" level=warning msg="cleaning up after shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305125504Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314046710Z" level=info msg="ignoring event" container=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314082117Z" level=info msg="ignoring event" container=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.302995873Z" level=info msg="shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314416185Z" level=warning msg="cleaning up after shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314428687Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317021012Z" level=info msg="shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317067522Z" level=warning msg="cleaning up after shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317077724Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.303304235Z" level=info msg="shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323122848Z" level=warning msg="cleaning up after shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323280880Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331804205Z" level=info msg="shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331915728Z" level=warning msg="cleaning up after shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331964638Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.351922178Z" level=info msg="ignoring event" container=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.352121318Z" level=info msg="ignoring event" container=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.352878171Z" level=info msg="shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353018500Z" level=warning msg="cleaning up after shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353272851Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353087514Z" level=info msg="shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366800790Z" level=warning msg="cleaning up after shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366924715Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4212]: time="2024-09-23T11:43:02.178902577Z" level=info msg="ignoring event" container=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.180564113Z" level=info msg="shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.181657335Z" level=warning msg="cleaning up after shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.182298464Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.148744009Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.187274688Z" level=info msg="ignoring event" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187333094Z" level=info msg="shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187370797Z" level=warning msg="cleaning up after shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187380798Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.255786085Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256042511Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256207728Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256269334Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:43:08 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Consumed 9.026s CPU time.
	Sep 23 11:43:08 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:43:08 functional-877700 dockerd[8665]: time="2024-09-23T11:43:08.304403480Z" level=info msg="Starting up"
	Sep 23 11:44:08 functional-877700 dockerd[8665]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 11:44:08 functional-877700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0923 11:44:08.210247    7852 out.go:270] * 
	W0923 11:44:08.211375    7852 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 11:44:08.216471    7852 out.go:201] 
	
	
	==> Docker <==
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8'"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="error getting RW layer size for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:48:09 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:48:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1'"
	Sep 23 11:48:09 functional-877700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Sep 23 11:48:09 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:48:09 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-23T11:48:11Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +7.212864] kauditd_printk_skb: 88 callbacks suppressed
	[Sep23 11:38] kauditd_printk_skb: 10 callbacks suppressed
	[Sep23 11:39] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.560403] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.266225] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.259657] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +5.242721] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.039221] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.187714] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.177919] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.247363] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.078027] systemd-fstab-generator[4815]: Ignoring "noauto" option for root device
	[  +0.552186] kauditd_printk_skb: 169 callbacks suppressed
	[  +8.107083] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.047969] systemd-fstab-generator[6111]: Ignoring "noauto" option for root device
	[  +0.111564] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.007075] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.332556] systemd-fstab-generator[6642]: Ignoring "noauto" option for root device
	[  +0.159781] kauditd_printk_skb: 3 callbacks suppressed
	[Sep23 11:42] systemd-fstab-generator[8204]: Ignoring "noauto" option for root device
	[  +0.163607] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.439546] systemd-fstab-generator[8239]: Ignoring "noauto" option for root device
	[  +0.232654] systemd-fstab-generator[8251]: Ignoring "noauto" option for root device
	[  +0.268342] systemd-fstab-generator[8265]: Ignoring "noauto" option for root device
	[Sep23 11:43] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:49:09 up 13 min,  0 users,  load average: 0.09, 0.14, 0.14
	Linux functional-877700 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.438166    6119 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.438777    6119 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.438982    6119 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.446606    6119 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.446676    6119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.446775    6119 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.447038    6119 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: I0923 11:49:09.447057    6119 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.447082    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.447106    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.447333    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.447487    6119 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.449148    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.449262    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.449587    6119 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: I0923 11:49:09.502374    6119 status_manager.go:851] "Failed to get status for pod" podUID="d94a2590761a98c126cc01e55566a60c" pod="kube-system/kube-apiserver-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: I0923 11:49:09.503196    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.566820    6119 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: I0923 11:49:09.566918    6119 setters.go:600] "Node became not ready" node="functional-877700" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-23T11:49:09Z","lastTransitionTime":"2024-09-23T11:49:09Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 6m12.809780644s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/run/docker.sock: read: connection reset by peer]"}
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.570922    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T11:49:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T11:49:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T11:49:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T11:49:09Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-23T11:49:09Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 6m12.809780644s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to g
et docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-877700\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700/status?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.572571    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.573284    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.574032    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.574907    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:49:09 functional-877700 kubelet[6119]: E0923 11:49:09.575019    6119 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0923 11:48:08.933773    6868 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:48:08.963246    6868 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:48:08.988684    6868 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:48:09.015414    6868 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:48:09.042376    6868 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:48:09.073050    6868 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:48:09.105156    6868 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:48:09.129202    6868 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700: exit status 2 (10.6066032s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-877700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (120.62s)

TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-877700 apply -f testdata\invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-877700 apply -f testdata\invalidsvc.yaml: exit status 1 (4.2030562s)

** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://172.19.157.210:8441/openapi/v2?timeout=32s": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2323: kubectl --context functional-877700 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (4.22s)

TestFunctional/parallel/StatusCmd (226.43s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 status: exit status 2 (10.1902877s)

-- stdout --
	functional-877700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-877700 status" : exit status 2
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (10.1916984s)

-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-877700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 status -o json: exit status 2 (10.4965555s)

-- stdout --
	{"Name":"functional-877700","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-877700 status -o json" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700: exit status 2 (10.3444626s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs -n 25: (2m54.4943258s)
helpers_test.go:252: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p functional-877700                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                            |                   |                   |         |                     |                     |
	|         | --wait=all                                                                                          |                   |                   |         |                     |                     |
	| config  | functional-877700 config unset                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh echo                                                                          | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | hello                                                                                               |                   |                   |         |                     |                     |
	| tunnel  | functional-877700 tunnel                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| cp      | functional-877700 cp                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| tunnel  | functional-877700 tunnel                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| config  | functional-877700 config get                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-877700 config set                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | cpus 2                                                                                              |                   |                   |         |                     |                     |
	| config  | functional-877700 config get                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-877700 config unset                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-877700 config get                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh -n                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | functional-877700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| tunnel  | functional-877700 tunnel                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh cat                                                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| addons  | functional-877700 addons list                                                                       | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	| addons  | functional-877700 addons list                                                                       | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| cp      | functional-877700 cp functional-877700:/home/docker/cp-test.txt                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2601736257\001\cp-test.txt |                   |                   |         |                     |                     |
	| service | functional-877700 service list                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	| ssh     | functional-877700 ssh -n                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | functional-877700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service | functional-877700 service list                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| cp      | functional-877700 cp                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| service | functional-877700 service                                                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | --namespace=default --https                                                                         |                   |                   |         |                     |                     |
	|         | --url hello-node                                                                                    |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh -n                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:54 UTC |
	|         | functional-877700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| service | functional-877700                                                                                   | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | service hello-node --url                                                                            |                   |                   |         |                     |                     |
	|         | --format={{.IP}}                                                                                    |                   |                   |         |                     |                     |
	| service | functional-877700 service                                                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:54 UTC |                     |
	|         | hello-node --url                                                                                    |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:41:49
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:41:49.209678    7852 out.go:345] Setting OutFile to fd 292 ...
	I0923 11:41:49.254977    7852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:41:49.254977    7852 out.go:358] Setting ErrFile to fd 284...
	I0923 11:41:49.254977    7852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:41:49.276647    7852 out.go:352] Setting JSON to false
	I0923 11:41:49.282204    7852 start.go:129] hostinfo: {"hostname":"minikube5","uptime":487685,"bootTime":1726604023,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:41:49.282285    7852 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:41:49.287181    7852 out.go:177] * [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:41:49.288949    7852 notify.go:220] Checking for updates...
	I0923 11:41:49.290898    7852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:41:49.293889    7852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:41:49.295578    7852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:41:49.302549    7852 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:41:49.308496    7852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:41:49.312968    7852 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:41:49.313645    7852 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:41:53.910812    7852 out.go:177] * Using the hyperv driver based on existing profile
	I0923 11:41:53.912248    7852 start.go:297] selected driver: hyperv
	I0923 11:41:53.912248    7852 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:41:53.913184    7852 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:41:53.952383    7852 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:41:53.952383    7852 cni.go:84] Creating CNI manager for ""
	I0923 11:41:53.952383    7852 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:41:53.952383    7852 start.go:340] cluster config:
	{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:41:53.952999    7852 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:41:53.957227    7852 out.go:177] * Starting "functional-877700" primary control-plane node in "functional-877700" cluster
	I0923 11:41:53.959364    7852 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:41:53.959364    7852 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:41:53.959364    7852 cache.go:56] Caching tarball of preloaded images
	I0923 11:41:53.960009    7852 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:41:53.960009    7852 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:41:53.960009    7852 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\config.json ...
	I0923 11:41:53.961599    7852 start.go:360] acquireMachinesLock for functional-877700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:41:53.961599    7852 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-877700"
	I0923 11:41:53.961599    7852 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:41:53.961599    7852 fix.go:54] fixHost starting: 
	I0923 11:41:53.962631    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:41:56.287566    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:41:56.287566    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:41:56.287566    7852 fix.go:112] recreateIfNeeded on functional-877700: state=Running err=<nil>
	W0923 11:41:56.287566    7852 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:41:56.291781    7852 out.go:177] * Updating the running hyperv "functional-877700" VM ...
	I0923 11:41:56.293697    7852 machine.go:93] provisionDockerMachine start ...
	I0923 11:41:56.293697    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:41:58.141209    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:41:58.141209    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:41:58.141860    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:00.334077    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:00.334077    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:00.340169    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:00.340820    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:00.340820    7852 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:42:00.476012    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:42:00.476012    7852 buildroot.go:166] provisioning hostname "functional-877700"
	I0923 11:42:00.476196    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:02.350414    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:02.350414    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:02.350489    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:04.528996    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:04.528996    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:04.532106    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:04.532663    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:04.532663    7852 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-877700 && echo "functional-877700" | sudo tee /etc/hostname
	I0923 11:42:04.695359    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:42:04.695462    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:06.512485    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:06.512485    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:06.512575    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:08.680076    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:08.680076    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:08.685385    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:08.685385    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:08.685385    7852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-877700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-877700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-877700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:42:08.818616    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:42:08.818779    7852 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 11:42:08.818779    7852 buildroot.go:174] setting up certificates
	I0923 11:42:08.818779    7852 provision.go:84] configureAuth start
	I0923 11:42:08.818921    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:12.871773    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:12.871773    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:12.872169    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:16.857888    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:16.857888    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:16.857888    7852 provision.go:143] copyHostCerts
	I0923 11:42:16.859128    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 11:42:16.859128    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 11:42:16.859459    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 11:42:16.860464    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 11:42:16.860464    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 11:42:16.860464    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 11:42:16.861061    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 11:42:16.861061    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 11:42:16.861668    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 11:42:16.862376    7852 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-877700 san=[127.0.0.1 172.19.157.210 functional-877700 localhost minikube]
	I0923 11:42:17.030195    7852 provision.go:177] copyRemoteCerts
	I0923 11:42:17.038185    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:42:17.038185    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:18.866277    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:18.866277    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:18.866359    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:21.044973    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:21.044973    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:21.045318    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:21.146797    7852 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1083353s)
	I0923 11:42:21.147358    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 11:42:21.190478    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:42:21.235758    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:42:21.278554    7852 provision.go:87] duration metric: took 12.4587259s to configureAuth
	I0923 11:42:21.278554    7852 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:42:21.279490    7852 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:42:21.279632    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:25.288342    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:25.288342    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:25.293586    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:25.294322    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:25.294322    7852 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:42:25.433856    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 11:42:25.433977    7852 buildroot.go:70] root file system type: tmpfs
	I0923 11:42:25.433977    7852 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:42:25.434214    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:27.284575    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:27.284575    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:27.284627    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:29.509670    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:29.509670    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:29.514280    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:29.514512    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:29.514512    7852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:42:29.686040    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 11:42:29.686110    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:31.546974    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:31.546974    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:31.547611    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:33.788582    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:33.788582    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:33.791693    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:33.792102    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:33.792102    7852 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:42:33.934969    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:42:33.934969    7852 machine.go:96] duration metric: took 37.6387313s to provisionDockerMachine
	I0923 11:42:33.934969    7852 start.go:293] postStartSetup for "functional-877700" (driver="hyperv")
	I0923 11:42:33.934969    7852 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:42:33.944284    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:42:33.944798    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:38.034820    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:38.034820    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:38.034934    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:38.139038    7852 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.194305s)
	I0923 11:42:38.150165    7852 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:42:38.158855    7852 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:42:38.158919    7852 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 11:42:38.159371    7852 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 11:42:38.160573    7852 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 11:42:38.161924    7852 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts -> hosts in /etc/test/nested/copy/3844
	I0923 11:42:38.171682    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3844
	I0923 11:42:38.188615    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 11:42:38.227276    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts --> /etc/test/nested/copy/3844/hosts (40 bytes)
	I0923 11:42:38.273146    7852 start.go:296] duration metric: took 4.337884s for postStartSetup
	I0923 11:42:38.273276    7852 fix.go:56] duration metric: took 44.3086851s for fixHost
	I0923 11:42:38.273367    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:40.096277    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:40.096277    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:40.097281    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:42.292209    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:42.292209    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:42.295797    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:42.295797    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:42.295797    7852 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:42:42.422879    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727091762.660852161
	
	I0923 11:42:42.422879    7852 fix.go:216] guest clock: 1727091762.660852161
	I0923 11:42:42.422879    7852 fix.go:229] Guest: 2024-09-23 11:42:42.660852161 +0000 UTC Remote: 2024-09-23 11:42:38.273276 +0000 UTC m=+49.132292601 (delta=4.387576161s)
	I0923 11:42:42.423001    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:44.241611    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:44.241611    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:44.241701    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:46.426874    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:46.426874    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:46.431658    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:46.432084    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:46.432084    7852 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727091762
	I0923 11:42:46.574315    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 11:42:42 UTC 2024
	
	I0923 11:42:46.574315    7852 fix.go:236] clock set: Mon Sep 23 11:42:42 UTC 2024
	 (err=<nil>)
	I0923 11:42:46.574315    7852 start.go:83] releasing machines lock for "functional-877700", held for 52.6091639s
	I0923 11:42:46.574614    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:50.628836    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:50.628836    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:50.631851    7852 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 11:42:50.631923    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:50.638893    7852 ssh_runner.go:195] Run: cat /version.json
	I0923 11:42:50.638893    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:52.529440    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:52.529440    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:52.529534    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:52.530129    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:52.530129    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:52.530309    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:54.892899    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:54.892899    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:54.893451    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:54.922489    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:54.922489    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:54.923227    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:54.987522    7852 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3552765s)
	W0923 11:42:54.987522    7852 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 11:42:55.011761    7852 ssh_runner.go:235] Completed: cat /version.json: (4.372573s)
	I0923 11:42:55.021068    7852 ssh_runner.go:195] Run: systemctl --version
	I0923 11:42:55.046977    7852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:42:55.055881    7852 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:42:55.064287    7852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0923 11:42:55.080316    7852 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 11:42:55.080316    7852 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 11:42:55.081946    7852 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:42:55.081946    7852 start.go:495] detecting cgroup driver to use...
	I0923 11:42:55.082192    7852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:42:55.135008    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:42:55.168837    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:42:55.187565    7852 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:42:55.200418    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:42:55.232012    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:42:55.258853    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:42:55.292031    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:42:55.323589    7852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:42:55.352615    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:42:55.382917    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:42:55.411755    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:42:55.438233    7852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:42:55.467842    7852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:42:55.492085    7852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:42:55.741316    7852 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:42:55.772164    7852 start.go:495] detecting cgroup driver to use...
	I0923 11:42:55.778408    7852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:42:55.809605    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:42:55.842637    7852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:42:55.892970    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:42:55.924340    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:42:55.945420    7852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:42:55.988098    7852 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:42:56.004278    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:42:56.020838    7852 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:42:56.062007    7852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:42:56.309274    7852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:42:56.534069    7852 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:42:56.534348    7852 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:42:56.579775    7852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:42:56.828868    7852 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:44:08.114305    7852 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.2806256s)
	I0923 11:44:08.123742    7852 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0923 11:44:08.204240    7852 out.go:201] 
	W0923 11:44:08.208902    7852 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 23 11:36:16 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.363363894Z" level=info msg="Starting up"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.364381436Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.366062085Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.396070486Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421333599Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421440830Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421570288Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421587008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421667907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421679921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421834109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421929426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421947548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421957860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422282556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422610556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425477453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425563258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425695819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425774515Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425864325Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.426020415Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453345243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453442561Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453474801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453505038Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453531670Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453748134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454565932Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454894032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455202408Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455361702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455394342Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455442301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455467531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455493062Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455526603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455676686Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455719839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455745671Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455780914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456112818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456146960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456171390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456195719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456219749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456243578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456268308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456292437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456320171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456342999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456365226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456389456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456422696Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456459942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456484772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456599912Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456726166Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456763512Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456785339Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456808367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456828992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456851820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456870242Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457499810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457780653Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458271151Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458406216Z" level=info msg="containerd successfully booted in 0.063489s"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.438240515Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.473584775Z" level=info msg="Loading containers: start."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.634831782Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.851751895Z" level=info msg="Loading containers: done."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874123922Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874156661Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874177084Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874278903Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.968950643Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:17 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.969332588Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:44 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.554614697Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556291346Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556587407Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556810554Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556852062Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:45 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:45 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:45 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.606504166Z" level=info msg="Starting up"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.607566487Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.608690520Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1083
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.636170230Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659914064Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659955972Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659987979Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660000482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660028287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660040390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660182519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660274439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660290442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660300844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660323649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660431771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663679446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663727356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663877987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663961805Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663986710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664002713Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664120738Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664205755Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664221759Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664234661Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664246764Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664292974Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664521321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664671252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664703859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664718762Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664734365Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664746668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664757570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664774174Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664787276Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664799379Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664809981Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664820583Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664838487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664852090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664866093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664877295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664892798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664905901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664916803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664928006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664943709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664956511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664969114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664979916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664990619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665012623Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665031027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665043630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665056732Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665171356Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665201862Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665212665Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665224367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665234269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665245171Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665254373Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665604346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665818991Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665891906Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665919111Z" level=info msg="containerd successfully booted in 0.030553s"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.653176350Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.675801552Z" level=info msg="Loading containers: start."
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.797816505Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.918274234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.007319036Z" level=info msg="Loading containers: done."
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028686376Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028806601Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064119439Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:47 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064879197Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:54 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.935065116Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937521126Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937778380Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937936813Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937979322Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:55 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:55 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:55 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.988353475Z" level=info msg="Starting up"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.989122935Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.990176454Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1438
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.017499432Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043088049Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043124956Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043161464Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043189570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043214075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043226777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043374408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043389611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043405915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043416317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043437321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043535541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048684911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048772030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048893355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048907058Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048925862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048940465Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049080094Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049124003Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049137105Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049149608Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049163411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049199118Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049372354Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049445570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049459672Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049470875Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049482677Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049493680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049503882Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049515184Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049527787Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049538989Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049549591Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049559293Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049577897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049589499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049605003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049621306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049668716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049680618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049770737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049783840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049795442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049809645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049820347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049830650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049840952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049854054Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049872358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049882760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049892962Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049957876Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049973979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049984782Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049996884Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050008086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050020589Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050030991Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050284044Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050364160Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050404669Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050421872Z" level=info msg="containerd successfully booted in 0.033699s"
	Sep 23 11:36:57 functional-877700 dockerd[1431]: time="2024-09-23T11:36:57.056326286Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.408774280Z" level=info msg="Loading containers: start."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.555047973Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.673736035Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.765116623Z" level=info msg="Loading containers: done."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790598218Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790686536Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.830332574Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:37:00 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.832121546Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.354009805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358760766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358775765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358863661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362188493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362427881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362723466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.363713216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.413758696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414484559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414540656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414737947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452495445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452537743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452547142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452655437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745032012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745369195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745520387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745802373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789231786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789328681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789386278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789667064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.852945577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853277660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853394154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.854419803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858509897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858696287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858725086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858836580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113489212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113703733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113877050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.119697810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.250500799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251616406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251773022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.252013345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304389586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304456992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304473794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304694515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625584633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625869859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625919364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.626072078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028719988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028812805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028850011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028993837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.072947376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073257230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073388453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073845734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822307240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822465368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822691908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822937552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.096995123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097134647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097148750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097582227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.272189447Z" level=info msg="ignoring event" container=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273020895Z" level=info msg="shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273141316Z" level=warning msg="cleaning up after shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273154519Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.446778855Z" level=info msg="ignoring event" container=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447046502Z" level=info msg="shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447173525Z" level=warning msg="cleaning up after shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447185327Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.683312452Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.829405193Z" level=info msg="ignoring event" container=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829931000Z" level=info msg="shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829990913Z" level=warning msg="cleaning up after shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.830002115Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.854549329Z" level=info msg="ignoring event" container=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854556130Z" level=info msg="shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854794479Z" level=warning msg="cleaning up after shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854869594Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.867096392Z" level=info msg="ignoring event" container=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.866991770Z" level=info msg="shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868159609Z" level=warning msg="cleaning up after shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868225122Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.873572915Z" level=info msg="ignoring event" container=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874484301Z" level=info msg="shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874598524Z" level=warning msg="cleaning up after shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874637232Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887059470Z" level=info msg="ignoring event" container=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887368033Z" level=info msg="shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887425744Z" level=warning msg="cleaning up after shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887435246Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887671094Z" level=info msg="shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887819125Z" level=info msg="ignoring event" container=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887879937Z" level=warning msg="cleaning up after shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.888018065Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907455436Z" level=info msg="ignoring event" container=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907495744Z" level=info msg="ignoring event" container=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925651252Z" level=info msg="shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925962016Z" level=warning msg="cleaning up after shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925973318Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936426853Z" level=info msg="shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936579385Z" level=warning msg="cleaning up after shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936607990Z" level=info msg="ignoring event" container=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936635296Z" level=info msg="ignoring event" container=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936653800Z" level=info msg="ignoring event" container=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936712612Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941574305Z" level=info msg="shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941655421Z" level=warning msg="cleaning up after shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941664923Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949111144Z" level=info msg="shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949309285Z" level=warning msg="cleaning up after shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949319987Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958358133Z" level=info msg="shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958473657Z" level=warning msg="cleaning up after shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958521967Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.968525410Z" level=info msg="shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.968824371Z" level=info msg="ignoring event" container=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969298368Z" level=warning msg="cleaning up after shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969415792Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1431]: time="2024-09-23T11:39:15.769798933Z" level=info msg="ignoring event" container=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771375655Z" level=info msg="shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771441369Z" level=warning msg="cleaning up after shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771451871Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.813293151Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869801842Z" level=info msg="shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869856448Z" level=warning msg="cleaning up after shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869865649Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.870564216Z" level=info msg="ignoring event" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932188905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932979382Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933172501Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933202803Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:39:21 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Consumed 4.872s CPU time.
	Sep 23 11:39:21 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984115697Z" level=info msg="Starting up"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984939583Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.986050598Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4218
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.016036706Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039700313Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039806824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039839228Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039850929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039873232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039883433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040054452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040204468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040224670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040235171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040258474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040353184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045464247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045565559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045977304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046077715Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046108618Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046167125Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046524364Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046646378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046666980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046682082Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046696783Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046754290Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047082126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047279447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047378458Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047396660Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047415362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047427264Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047440265Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047453267Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047467568Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047478569Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047489270Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047499572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047517474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047531675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047552577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047565479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047576180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047587681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047597882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047608984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047620485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047634286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047644388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047654189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047665990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047678791Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047697893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047708595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047722096Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047810506Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047829408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047840109Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047850910Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047860211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047874313Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047885714Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048256055Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048443976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048557088Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048575990Z" level=info msg="containerd successfully booted in 0.034003s"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.033830503Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.063758076Z" level=info msg="Loading containers: start."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.280002266Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.409457886Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.509213361Z" level=info msg="Loading containers: done."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543477036Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543685761Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.575120708Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:39:23 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.577119640Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115570481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115625188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115637189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115709298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227172822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227232230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227245531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227362246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.339817796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342223901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342254105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342468032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497814816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497986738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498064248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498360885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.525834667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526017090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526076497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.536907970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750278307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750411323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750425225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750516437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900046084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900773476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902226560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902856440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.939828625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940224975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940315387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.942516766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.185965232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.189330369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193023849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193305085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261234511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261411734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261512547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261673268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428281214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428580653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428743174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.432109011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694521004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694741033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694790839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694956561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.566925297Z" level=info msg="ignoring event" container=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568301726Z" level=info msg="shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568863020Z" level=warning msg="cleaning up after shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.569351401Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573603109Z" level=info msg="shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573696525Z" level=warning msg="cleaning up after shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573707227Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.573984273Z" level=info msg="ignoring event" container=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.578903092Z" level=info msg="ignoring event" container=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.579643816Z" level=info msg="shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581215878Z" level=warning msg="cleaning up after shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.581346099Z" level=info msg="ignoring event" container=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581428513Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581951000Z" level=info msg="shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581995407Z" level=warning msg="cleaning up after shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.582004009Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.597570502Z" level=info msg="ignoring event" container=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610389638Z" level=info msg="shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610530461Z" level=warning msg="cleaning up after shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610645881Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.621577202Z" level=info msg="ignoring event" container=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622367133Z" level=info msg="shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622475751Z" level=warning msg="cleaning up after shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622546463Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.635696754Z" level=info msg="shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636175234Z" level=warning msg="cleaning up after shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636360765Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.651928858Z" level=info msg="ignoring event" container=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.652691485Z" level=info msg="shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.660686917Z" level=info msg="ignoring event" container=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.663213638Z" level=info msg="ignoring event" container=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.669991668Z" level=warning msg="cleaning up after shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.670085183Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.660509388Z" level=info msg="shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.679562562Z" level=warning msg="cleaning up after shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.681033207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184736627Z" level=info msg="shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184785835Z" level=warning msg="cleaning up after shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184795637Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.185745696Z" level=info msg="ignoring event" container=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.278984682Z" level=info msg="shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279055694Z" level=warning msg="cleaning up after shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279067096Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.279977748Z" level=info msg="ignoring event" container=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410094699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410458659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410613285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410941440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568446569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568537384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568556887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568657804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.670526533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676344305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676364709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676451423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710366393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710454707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710469310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710551424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732630814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732737932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732870054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.733058486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997622111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997807842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997867052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997990773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094861386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094998109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095017513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095210746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.331029320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333698174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333727979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.334197760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:35 functional-877700 dockerd[4212]: time="2024-09-23T11:39:35.396427521Z" level=error msg="collecting stats for container /k8s_coredns_coredns-7c65d6cfc9-68rgs_kube-system_207034a8-50d8-43ec-b01c-2e0a29efdc66_1: invalid id: "
	Sep 23 11:39:35 functional-877700 dockerd[4212]: 2024/09/23 11:39:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Sep 23 11:39:37 functional-877700 dockerd[4212]: time="2024-09-23T11:39:37.337336680Z" level=info msg="ignoring event" container=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338143319Z" level=info msg="shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338506781Z" level=warning msg="cleaning up after shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338593696Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.137955358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138247609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138416139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138621075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198271806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198440435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198515748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.199563031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.222966524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223195264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223281379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223640342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981692372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981830996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981859501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981957318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.084899403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085158149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085423195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.087385540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.514583300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.522875456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523082393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523369543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.063783746Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:42:57 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.269387669Z" level=info msg="ignoring event" container=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272857372Z" level=info msg="shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272914983Z" level=warning msg="cleaning up after shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272989998Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273759654Z" level=info msg="shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273807364Z" level=warning msg="cleaning up after shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273816366Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274076818Z" level=info msg="ignoring event" container=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274334971Z" level=info msg="ignoring event" container=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281295780Z" level=info msg="shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281347090Z" level=warning msg="cleaning up after shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281429207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.281785979Z" level=info msg="ignoring event" container=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288761591Z" level=info msg="shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288857311Z" level=warning msg="cleaning up after shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288899919Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.289709683Z" level=info msg="ignoring event" container=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.289885119Z" level=info msg="shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290052853Z" level=warning msg="cleaning up after shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290077758Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.301171104Z" level=info msg="shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.301356441Z" level=info msg="ignoring event" container=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.303740624Z" level=info msg="ignoring event" container=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.304055988Z" level=info msg="ignoring event" container=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305020783Z" level=warning msg="cleaning up after shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305125504Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314046710Z" level=info msg="ignoring event" container=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314082117Z" level=info msg="ignoring event" container=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.302995873Z" level=info msg="shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314416185Z" level=warning msg="cleaning up after shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314428687Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317021012Z" level=info msg="shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317067522Z" level=warning msg="cleaning up after shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317077724Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.303304235Z" level=info msg="shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323122848Z" level=warning msg="cleaning up after shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323280880Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331804205Z" level=info msg="shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331915728Z" level=warning msg="cleaning up after shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331964638Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.351922178Z" level=info msg="ignoring event" container=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.352121318Z" level=info msg="ignoring event" container=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.352878171Z" level=info msg="shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353018500Z" level=warning msg="cleaning up after shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353272851Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353087514Z" level=info msg="shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366800790Z" level=warning msg="cleaning up after shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366924715Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4212]: time="2024-09-23T11:43:02.178902577Z" level=info msg="ignoring event" container=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.180564113Z" level=info msg="shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.181657335Z" level=warning msg="cleaning up after shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.182298464Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.148744009Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.187274688Z" level=info msg="ignoring event" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187333094Z" level=info msg="shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187370797Z" level=warning msg="cleaning up after shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187380798Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.255786085Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256042511Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256207728Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256269334Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:43:08 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Consumed 9.026s CPU time.
	Sep 23 11:43:08 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:43:08 functional-877700 dockerd[8665]: time="2024-09-23T11:43:08.304403480Z" level=info msg="Starting up"
	Sep 23 11:44:08 functional-877700 dockerd[8665]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 11:44:08 functional-877700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0923 11:44:08.210247    7852 out.go:270] * 
	W0923 11:44:08.211375    7852 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 11:44:08.216471    7852 out.go:201] 
	
	
	==> Docker <==
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID '5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID '033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID '7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error getting RW layer size for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f'"
	Sep 23 11:57:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:57:11Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Sep 23 11:57:11 functional-877700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Sep 23 11:57:11 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:57:11 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-23T11:57:13Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.560403] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.266225] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.259657] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +5.242721] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.039221] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.187714] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.177919] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.247363] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.078027] systemd-fstab-generator[4815]: Ignoring "noauto" option for root device
	[  +0.552186] kauditd_printk_skb: 169 callbacks suppressed
	[  +8.107083] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.047969] systemd-fstab-generator[6111]: Ignoring "noauto" option for root device
	[  +0.111564] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.007075] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.332556] systemd-fstab-generator[6642]: Ignoring "noauto" option for root device
	[  +0.159781] kauditd_printk_skb: 3 callbacks suppressed
	[Sep23 11:42] systemd-fstab-generator[8204]: Ignoring "noauto" option for root device
	[  +0.163607] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.439546] systemd-fstab-generator[8239]: Ignoring "noauto" option for root device
	[  +0.232654] systemd-fstab-generator[8251]: Ignoring "noauto" option for root device
	[  +0.268342] systemd-fstab-generator[8265]: Ignoring "noauto" option for root device
	[Sep23 11:43] kauditd_printk_skb: 89 callbacks suppressed
	[Sep23 11:57] systemd-fstab-generator[13188]: Ignoring "noauto" option for root device
	[Sep23 11:58] systemd-fstab-generator[13478]: Ignoring "noauto" option for root device
	[  +0.132871] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 11:58:12 up 22 min,  0 users,  load average: 0.07, 0.08, 0.09
	Linux functional-877700 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 23 11:57:59 functional-877700 kubelet[6119]: I0923 11:57:59.504306    6119 status_manager.go:851] "Failed to get status for pod" podUID="d94a2590761a98c126cc01e55566a60c" pod="kube-system/kube-apiserver-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:57:59 functional-877700 kubelet[6119]: I0923 11:57:59.505723    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:57:59 functional-877700 kubelet[6119]: E0923 11:57:59.802427    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 15m3.045279875s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 11:57:59 functional-877700 kubelet[6119]: E0923 11:57:59.815056    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 11:58:04 functional-877700 kubelet[6119]: E0923 11:58:04.803114    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 15m8.045974071s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 11:58:06 functional-877700 kubelet[6119]: E0923 11:58:06.763007    6119 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/etcd-functional-877700.17f7dcd24220a3fe\": dial tcp 172.19.157.210:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-877700.17f7dcd24220a3fe  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-877700,UID:1a2024253238820dd6dd104df30a6dbf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-877700,},FirstTimestamp:2024-09-23 11:42:57.73055283 +0000 UTC m=+198.428741603,LastTimestamp:2024-09-23 11:42:59.730379985 +0000 UTC m=+200.428568758,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-877700,}"
	Sep 23 11:58:06 functional-877700 kubelet[6119]: E0923 11:58:06.816726    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 11:58:09 functional-877700 kubelet[6119]: I0923 11:58:09.503391    6119 status_manager.go:851] "Failed to get status for pod" podUID="d94a2590761a98c126cc01e55566a60c" pod="kube-system/kube-apiserver-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:58:09 functional-877700 kubelet[6119]: I0923 11:58:09.504308    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:58:09 functional-877700 kubelet[6119]: E0923 11:58:09.804623    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 15m13.047487025s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.730138    6119 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.730797    6119 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.730995    6119 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.736290    6119 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.737361    6119 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.738560    6119 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.738602    6119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: I0923 11:58:11.740466    6119 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.736064    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.740626    6119 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.742051    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.743550    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.744425    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.744568    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 23 11:58:11 functional-877700 kubelet[6119]: E0923 11:58:11.744871    6119 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	E0923 11:56:11.066016   12944 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:56:11.117337   12944 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:56:11.144049   12944 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:56:11.179501   12944 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:57:11.285132   12944 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:57:11.315750   12944 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:57:11.346837   12944 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:57:11.379602   12944 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700: exit status 2 (10.4329348s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-877700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (226.43s)

TestFunctional/parallel/ServiceCmdConnect (174.17s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-877700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1629: (dbg) Non-zero exit: kubectl --context functional-877700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.1616154s)

** stderr ** 
	error: failed to create deployment: Post "https://172.19.157.210:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-877700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-877700 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-877700 describe po hello-node-connect: exit status 1 (2.1764726s)

** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1604: "kubectl --context functional-877700 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-877700 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-877700 logs -l app=hello-node-connect: exit status 1 (2.1677633s)

** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1610: "kubectl --context functional-877700 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-877700 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-877700 describe svc hello-node-connect: exit status 1 (2.1740868s)

** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1616: "kubectl --context functional-877700 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700: exit status 2 (11.158907s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs -n 25: (2m23.910873s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| cache   | functional-877700 cache reload                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	| ssh     | functional-877700 ssh                                                                               | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | sudo crictl inspecti                                                                                |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| cache   | delete                                                                                              | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | registry.k8s.io/pause:3.1                                                                           |                   |                   |         |                     |                     |
	| cache   | delete                                                                                              | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | registry.k8s.io/pause:latest                                                                        |                   |                   |         |                     |                     |
	| kubectl | functional-877700 kubectl --                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC | 23 Sep 24 11:41 UTC |
	|         | --context functional-877700                                                                         |                   |                   |         |                     |                     |
	|         | get pods                                                                                            |                   |                   |         |                     |                     |
	| start   | -p functional-877700                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:41 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                            |                   |                   |         |                     |                     |
	|         | --wait=all                                                                                          |                   |                   |         |                     |                     |
	| config  | functional-877700 config unset                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh echo                                                                          | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | hello                                                                                               |                   |                   |         |                     |                     |
	| tunnel  | functional-877700 tunnel                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| cp      | functional-877700 cp                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| tunnel  | functional-877700 tunnel                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| config  | functional-877700 config get                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-877700 config set                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | cpus 2                                                                                              |                   |                   |         |                     |                     |
	| config  | functional-877700 config get                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-877700 config unset                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| config  | functional-877700 config get                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | cpus                                                                                                |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh -n                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | functional-877700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| tunnel  | functional-877700 tunnel                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| ssh     | functional-877700 ssh cat                                                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| addons  | functional-877700 addons list                                                                       | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	| addons  | functional-877700 addons list                                                                       | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	| cp      | functional-877700 cp functional-877700:/home/docker/cp-test.txt                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2601736257\001\cp-test.txt |                   |                   |         |                     |                     |
	| service | functional-877700 service list                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	| ssh     | functional-877700 ssh -n                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | functional-877700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service | functional-877700 service list                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | -o json                                                                                             |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:41:49
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:41:49.209678    7852 out.go:345] Setting OutFile to fd 292 ...
	I0923 11:41:49.254977    7852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:41:49.254977    7852 out.go:358] Setting ErrFile to fd 284...
	I0923 11:41:49.254977    7852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:41:49.276647    7852 out.go:352] Setting JSON to false
	I0923 11:41:49.282204    7852 start.go:129] hostinfo: {"hostname":"minikube5","uptime":487685,"bootTime":1726604023,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:41:49.282285    7852 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:41:49.287181    7852 out.go:177] * [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:41:49.288949    7852 notify.go:220] Checking for updates...
	I0923 11:41:49.290898    7852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:41:49.293889    7852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:41:49.295578    7852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:41:49.302549    7852 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:41:49.308496    7852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:41:49.312968    7852 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:41:49.313645    7852 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:41:53.910812    7852 out.go:177] * Using the hyperv driver based on existing profile
	I0923 11:41:53.912248    7852 start.go:297] selected driver: hyperv
	I0923 11:41:53.912248    7852 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:41:53.913184    7852 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:41:53.952383    7852 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:41:53.952383    7852 cni.go:84] Creating CNI manager for ""
	I0923 11:41:53.952383    7852 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:41:53.952383    7852 start.go:340] cluster config:
	{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:41:53.952999    7852 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:41:53.957227    7852 out.go:177] * Starting "functional-877700" primary control-plane node in "functional-877700" cluster
	I0923 11:41:53.959364    7852 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:41:53.959364    7852 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:41:53.959364    7852 cache.go:56] Caching tarball of preloaded images
	I0923 11:41:53.960009    7852 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:41:53.960009    7852 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:41:53.960009    7852 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-877700\config.json ...
	I0923 11:41:53.961599    7852 start.go:360] acquireMachinesLock for functional-877700: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:41:53.961599    7852 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-877700"
	I0923 11:41:53.961599    7852 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:41:53.961599    7852 fix.go:54] fixHost starting: 
	I0923 11:41:53.962631    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:41:56.287566    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:41:56.287566    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:41:56.287566    7852 fix.go:112] recreateIfNeeded on functional-877700: state=Running err=<nil>
	W0923 11:41:56.287566    7852 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:41:56.291781    7852 out.go:177] * Updating the running hyperv "functional-877700" VM ...
	I0923 11:41:56.293697    7852 machine.go:93] provisionDockerMachine start ...
	I0923 11:41:56.293697    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:41:58.141209    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:41:58.141209    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:41:58.141860    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:00.334077    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:00.334077    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:00.340169    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:00.340820    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:00.340820    7852 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:42:00.476012    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:42:00.476012    7852 buildroot.go:166] provisioning hostname "functional-877700"
	I0923 11:42:00.476196    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:02.350414    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:02.350414    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:02.350489    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:04.528996    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:04.528996    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:04.532106    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:04.532663    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:04.532663    7852 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-877700 && echo "functional-877700" | sudo tee /etc/hostname
	I0923 11:42:04.695359    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-877700
	
	I0923 11:42:04.695462    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:06.512485    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:06.512485    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:06.512575    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:08.680076    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:08.680076    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:08.685385    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:08.685385    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:08.685385    7852 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-877700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-877700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-877700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:42:08.818616    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:42:08.818779    7852 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 11:42:08.818779    7852 buildroot.go:174] setting up certificates
	I0923 11:42:08.818779    7852 provision.go:84] configureAuth start
	I0923 11:42:08.818921    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:10.642911    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:12.871773    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:12.871773    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:12.872169    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:14.667700    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:16.857888    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:16.857888    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:16.857888    7852 provision.go:143] copyHostCerts
	I0923 11:42:16.859128    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 11:42:16.859128    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 11:42:16.859459    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 11:42:16.860464    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 11:42:16.860464    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 11:42:16.860464    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 11:42:16.861061    7852 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 11:42:16.861061    7852 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 11:42:16.861668    7852 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 11:42:16.862376    7852 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-877700 san=[127.0.0.1 172.19.157.210 functional-877700 localhost minikube]
	I0923 11:42:17.030195    7852 provision.go:177] copyRemoteCerts
	I0923 11:42:17.038185    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:42:17.038185    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:18.866277    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:18.866277    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:18.866359    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:21.044973    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:21.044973    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:21.045318    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:21.146797    7852 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1083353s)
	I0923 11:42:21.147358    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 11:42:21.190478    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 11:42:21.235758    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:42:21.278554    7852 provision.go:87] duration metric: took 12.4587259s to configureAuth
	I0923 11:42:21.278554    7852 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:42:21.279490    7852 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:42:21.279632    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:23.097700    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:25.288342    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:25.288342    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:25.293586    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:25.294322    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:25.294322    7852 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:42:25.433856    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 11:42:25.433977    7852 buildroot.go:70] root file system type: tmpfs
	I0923 11:42:25.433977    7852 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:42:25.434214    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:27.284575    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:27.284575    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:27.284627    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:29.509670    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:29.509670    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:29.514280    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:29.514512    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:29.514512    7852 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:42:29.686040    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 11:42:29.686110    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:31.546974    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:31.546974    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:31.547611    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:33.788582    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:33.788582    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:33.791693    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:33.792102    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:33.792102    7852 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:42:33.934969    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:42:33.934969    7852 machine.go:96] duration metric: took 37.6387313s to provisionDockerMachine
	I0923 11:42:33.934969    7852 start.go:293] postStartSetup for "functional-877700" (driver="hyperv")
	I0923 11:42:33.934969    7852 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:42:33.944284    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:42:33.944798    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:35.842578    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:38.034820    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:38.034820    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:38.034934    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:38.139038    7852 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.194305s)
	I0923 11:42:38.150165    7852 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:42:38.158855    7852 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:42:38.158919    7852 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 11:42:38.159371    7852 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 11:42:38.160573    7852 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 11:42:38.161924    7852 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts -> hosts in /etc/test/nested/copy/3844
	I0923 11:42:38.171682    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/3844
	I0923 11:42:38.188615    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 11:42:38.227276    7852 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts --> /etc/test/nested/copy/3844/hosts (40 bytes)
	I0923 11:42:38.273146    7852 start.go:296] duration metric: took 4.337884s for postStartSetup
	I0923 11:42:38.273276    7852 fix.go:56] duration metric: took 44.3086851s for fixHost
	I0923 11:42:38.273367    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:40.096277    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:40.096277    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:40.097281    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:42.292209    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:42.292209    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:42.295797    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:42.295797    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:42.295797    7852 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:42:42.422879    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727091762.660852161
	
	I0923 11:42:42.422879    7852 fix.go:216] guest clock: 1727091762.660852161
	I0923 11:42:42.422879    7852 fix.go:229] Guest: 2024-09-23 11:42:42.660852161 +0000 UTC Remote: 2024-09-23 11:42:38.273276 +0000 UTC m=+49.132292601 (delta=4.387576161s)
	I0923 11:42:42.423001    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:44.241611    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:44.241611    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:44.241701    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:46.426874    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:46.426874    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:46.431658    7852 main.go:141] libmachine: Using SSH client type: native
	I0923 11:42:46.432084    7852 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.157.210 22 <nil> <nil>}
	I0923 11:42:46.432084    7852 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727091762
	I0923 11:42:46.574315    7852 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 11:42:42 UTC 2024
	
	I0923 11:42:46.574315    7852 fix.go:236] clock set: Mon Sep 23 11:42:42 UTC 2024
	 (err=<nil>)
	I0923 11:42:46.574315    7852 start.go:83] releasing machines lock for "functional-877700", held for 52.6091639s
	I0923 11:42:46.574614    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:48.427838    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:50.628836    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:50.628836    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:50.631851    7852 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 11:42:50.631923    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:50.638893    7852 ssh_runner.go:195] Run: cat /version.json
	I0923 11:42:50.638893    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
	I0923 11:42:52.529440    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:52.529440    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:52.529534    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:52.530129    7852 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 11:42:52.530129    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:52.530309    7852 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
	I0923 11:42:54.892899    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:54.892899    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:54.893451    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:54.922489    7852 main.go:141] libmachine: [stdout =====>] : 172.19.157.210
	
	I0923 11:42:54.922489    7852 main.go:141] libmachine: [stderr =====>] : 
	I0923 11:42:54.923227    7852 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
	I0923 11:42:54.987522    7852 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3552765s)
	W0923 11:42:54.987522    7852 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 11:42:55.011761    7852 ssh_runner.go:235] Completed: cat /version.json: (4.372573s)
	I0923 11:42:55.021068    7852 ssh_runner.go:195] Run: systemctl --version
	I0923 11:42:55.046977    7852 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:42:55.055881    7852 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:42:55.064287    7852 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0923 11:42:55.080316    7852 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 11:42:55.080316    7852 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 11:42:55.081946    7852 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:42:55.081946    7852 start.go:495] detecting cgroup driver to use...
	I0923 11:42:55.082192    7852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:42:55.135008    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:42:55.168837    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:42:55.187565    7852 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:42:55.200418    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:42:55.232012    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:42:55.258853    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:42:55.292031    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:42:55.323589    7852 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:42:55.352615    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:42:55.382917    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:42:55.411755    7852 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:42:55.438233    7852 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:42:55.467842    7852 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:42:55.492085    7852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:42:55.741316    7852 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:42:55.772164    7852 start.go:495] detecting cgroup driver to use...
	I0923 11:42:55.778408    7852 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:42:55.809605    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:42:55.842637    7852 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:42:55.892970    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:42:55.924340    7852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:42:55.945420    7852 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:42:55.988098    7852 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:42:56.004278    7852 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:42:56.020838    7852 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:42:56.062007    7852 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:42:56.309274    7852 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:42:56.534069    7852 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:42:56.534348    7852 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:42:56.579775    7852 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:42:56.828868    7852 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:44:08.114305    7852 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.2806256s)
	I0923 11:44:08.123742    7852 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0923 11:44:08.204240    7852 out.go:201] 
	W0923 11:44:08.208902    7852 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 23 11:36:16 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.363363894Z" level=info msg="Starting up"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.364381436Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:16 functional-877700 dockerd[662]: time="2024-09-23T11:36:16.366062085Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.396070486Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421333599Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421440830Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421570288Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421587008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421667907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421679921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421834109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421929426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421947548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.421957860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422282556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.422610556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425477453Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425563258Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425695819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425774515Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.425864325Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.426020415Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453345243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453442561Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453474801Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453505038Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453531670Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.453748134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454565932Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.454894032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455202408Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455361702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455394342Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455442301Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455467531Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455493062Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455526603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455676686Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455719839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455745671Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.455780914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456112818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456146960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456171390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456195719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456219749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456243578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456268308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456292437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456320171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456342999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456365226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456389456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456422696Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456459942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456484772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456599912Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456726166Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456763512Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456785339Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456808367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456828992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456851820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.456870242Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457499810Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.457780653Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458271151Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:16 functional-877700 dockerd[668]: time="2024-09-23T11:36:16.458406216Z" level=info msg="containerd successfully booted in 0.063489s"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.438240515Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.473584775Z" level=info msg="Loading containers: start."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.634831782Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.851751895Z" level=info msg="Loading containers: done."
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874123922Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874156661Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874177084Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.874278903Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.968950643Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:17 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:17 functional-877700 dockerd[662]: time="2024-09-23T11:36:17.969332588Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:44 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.554614697Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556291346Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556587407Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556810554Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:44 functional-877700 dockerd[662]: time="2024-09-23T11:36:44.556852062Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:45 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:45 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:45 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.606504166Z" level=info msg="Starting up"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.607566487Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:45 functional-877700 dockerd[1077]: time="2024-09-23T11:36:45.608690520Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1083
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.636170230Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659914064Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659955972Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.659987979Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660000482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660028287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660040390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660182519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660274439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660290442Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660300844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660323649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.660431771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663679446Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663727356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663877987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663961805Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.663986710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664002713Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664120738Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664205755Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664221759Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664234661Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664246764Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664292974Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664521321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664671252Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664703859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664718762Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664734365Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664746668Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664757570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664774174Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664787276Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664799379Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664809981Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664820583Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664838487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664852090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664866093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664877295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664892798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664905901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664916803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664928006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664943709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664956511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664969114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664979916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.664990619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665012623Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665031027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665043630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665056732Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665171356Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665201862Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665212665Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665224367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665234269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665245171Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665254373Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665604346Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665818991Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665891906Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:45 functional-877700 dockerd[1083]: time="2024-09-23T11:36:45.665919111Z" level=info msg="containerd successfully booted in 0.030553s"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.653176350Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.675801552Z" level=info msg="Loading containers: start."
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.797816505Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:36:46 functional-877700 dockerd[1077]: time="2024-09-23T11:36:46.918274234Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.007319036Z" level=info msg="Loading containers: done."
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028686376Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.028806601Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064119439Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:36:47 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:36:47 functional-877700 dockerd[1077]: time="2024-09-23T11:36:47.064879197Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:36:54 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.935065116Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937521126Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937778380Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937936813Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:36:54 functional-877700 dockerd[1077]: time="2024-09-23T11:36:54.937979322Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:36:55 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:36:55 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:36:55 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.988353475Z" level=info msg="Starting up"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.989122935Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:36:55 functional-877700 dockerd[1431]: time="2024-09-23T11:36:55.990176454Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1438
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.017499432Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043088049Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043124956Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043161464Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043189570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043214075Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043226777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043374408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043389611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043405915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043416317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043437321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.043535541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048684911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048772030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048893355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048907058Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048925862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.048940465Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049080094Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049124003Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049137105Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049149608Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049163411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049199118Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049372354Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049445570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049459672Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049470875Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049482677Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049493680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049503882Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049515184Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049527787Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049538989Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049549591Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049559293Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049577897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049589499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049605003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049621306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049668716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049680618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049770737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049783840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049795442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049809645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049820347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049830650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049840952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049854054Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049872358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049882760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049892962Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049957876Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049973979Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049984782Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.049996884Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050008086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050020589Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050030991Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050284044Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050364160Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050404669Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:36:56 functional-877700 dockerd[1438]: time="2024-09-23T11:36:56.050421872Z" level=info msg="containerd successfully booted in 0.033699s"
	Sep 23 11:36:57 functional-877700 dockerd[1431]: time="2024-09-23T11:36:57.056326286Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.408774280Z" level=info msg="Loading containers: start."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.555047973Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.673736035Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.765116623Z" level=info msg="Loading containers: done."
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790598218Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.790686536Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.830332574Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:37:00 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:37:00 functional-877700 dockerd[1431]: time="2024-09-23T11:37:00.832121546Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.354009805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358760766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358775765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.358863661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362188493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362427881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.362723466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.363713216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.413758696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414484559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414540656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.414737947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452495445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452537743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452547142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.452655437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745032012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745369195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745520387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.745802373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789231786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789328681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789386278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.789667064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.852945577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853277660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.853394154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.854419803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858509897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858696287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858725086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:08 functional-877700 dockerd[1438]: time="2024-09-23T11:37:08.858836580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113489212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113703733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.113877050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.119697810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.250500799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251616406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.251773022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.252013345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304389586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304456992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304473794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.304694515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625584633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625869859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.625919364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:21 functional-877700 dockerd[1438]: time="2024-09-23T11:37:21.626072078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028719988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028812805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028850011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.028993837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.072947376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073257230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073388453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:22 functional-877700 dockerd[1438]: time="2024-09-23T11:37:22.073845734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822307240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822465368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822691908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:27 functional-877700 dockerd[1438]: time="2024-09-23T11:37:27.822937552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.096995123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097134647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097148750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:28 functional-877700 dockerd[1438]: time="2024-09-23T11:37:28.097582227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.272189447Z" level=info msg="ignoring event" container=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273020895Z" level=info msg="shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273141316Z" level=warning msg="cleaning up after shim disconnected" id=43f7d20c9f9155eac1f28b535a8a9436446d2776230263e5de0951fa4ff2390e namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.273154519Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1431]: time="2024-09-23T11:37:32.446778855Z" level=info msg="ignoring event" container=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447046502Z" level=info msg="shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447173525Z" level=warning msg="cleaning up after shim disconnected" id=bbfc022eb86ccbf4a5db7ec63595428a55623f6b106a497a376d669d4d5dd627 namespace=moby
	Sep 23 11:37:32 functional-877700 dockerd[1438]: time="2024-09-23T11:37:32.447185327Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.683312452Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.829405193Z" level=info msg="ignoring event" container=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829931000Z" level=info msg="shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.829990913Z" level=warning msg="cleaning up after shim disconnected" id=023338df5e0bc01b318d03acc989800d5c6553cce275c948c87e8f390bf6fc7f namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.830002115Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.854549329Z" level=info msg="ignoring event" container=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854556130Z" level=info msg="shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854794479Z" level=warning msg="cleaning up after shim disconnected" id=2f4c688acdf794dfd879b494e5ab67c8b1e5b5378c5743b524c40f52191e6cf6 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.854869594Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.867096392Z" level=info msg="ignoring event" container=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.866991770Z" level=info msg="shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868159609Z" level=warning msg="cleaning up after shim disconnected" id=14d205533a2b3ce4bff69158e5baeb2b1f8d31516b0f43a6113c944e62fa5f87 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.868225122Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.873572915Z" level=info msg="ignoring event" container=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874484301Z" level=info msg="shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874598524Z" level=warning msg="cleaning up after shim disconnected" id=f16ac040529feac868942d7acc3332482d93151c4f00542391a1bc2601e330ee namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.874637232Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887059470Z" level=info msg="ignoring event" container=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887368033Z" level=info msg="shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887425744Z" level=warning msg="cleaning up after shim disconnected" id=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887435246Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887671094Z" level=info msg="shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.887819125Z" level=info msg="ignoring event" container=86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.887879937Z" level=warning msg="cleaning up after shim disconnected" id=b3b1c0d74fa8634aca5787ba2a5eb17227692dce9b39671c3eb2cddd41f39bb0 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.888018065Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907455436Z" level=info msg="ignoring event" container=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.907495744Z" level=info msg="ignoring event" container=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925651252Z" level=info msg="shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925962016Z" level=warning msg="cleaning up after shim disconnected" id=fa882d59aaf70ca431e535aa1c0cdfa5e1b1482745d403f37cdee2bc3f5d1697 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.925973318Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936426853Z" level=info msg="shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936579385Z" level=warning msg="cleaning up after shim disconnected" id=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936607990Z" level=info msg="ignoring event" container=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936635296Z" level=info msg="ignoring event" container=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.936653800Z" level=info msg="ignoring event" container=8315b33ac875cdea0310a206980f7f346954739e57088ea9a28fadfff4436d1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.936712612Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941574305Z" level=info msg="shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941655421Z" level=warning msg="cleaning up after shim disconnected" id=0991f143c31e012d4f7025acb83aeb867d26a13f0b4d6531dfceab17057d613b namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.941664923Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949111144Z" level=info msg="shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949309285Z" level=warning msg="cleaning up after shim disconnected" id=a309e060ac61ca2100d557e21ae40ed667399cc8d1d583371955ba61588bffc4 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.949319987Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958358133Z" level=info msg="shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958473657Z" level=warning msg="cleaning up after shim disconnected" id=99bd9defd281076c1b96ab701e834c703e173d8f0f0972bc068e7bb5185af5e2 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.958521967Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.968525410Z" level=info msg="shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1431]: time="2024-09-23T11:39:10.968824371Z" level=info msg="ignoring event" container=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969298368Z" level=warning msg="cleaning up after shim disconnected" id=53b80274c7f70d7ff25f96da47390894aa5a5547eb016d28f61b7e380a136da7 namespace=moby
	Sep 23 11:39:10 functional-877700 dockerd[1438]: time="2024-09-23T11:39:10.969415792Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1431]: time="2024-09-23T11:39:15.769798933Z" level=info msg="ignoring event" container=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771375655Z" level=info msg="shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771441369Z" level=warning msg="cleaning up after shim disconnected" id=7f27ce21cc9a13cf5c17c8cd3782be374d28396e7fb54d9db73bcf1582c185fd namespace=moby
	Sep 23 11:39:15 functional-877700 dockerd[1438]: time="2024-09-23T11:39:15.771451871Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.813293151Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869801842Z" level=info msg="shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869856448Z" level=warning msg="cleaning up after shim disconnected" id=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1438]: time="2024-09-23T11:39:20.869865649Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.870564216Z" level=info msg="ignoring event" container=cc1a8c14f137e8abf8efdbced6f03756a0e3425b185db56b74b032ef4c46616a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932188905Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.932979382Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933172501Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:39:20 functional-877700 dockerd[1431]: time="2024-09-23T11:39:20.933202803Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:39:21 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:39:21 functional-877700 systemd[1]: docker.service: Consumed 4.872s CPU time.
	Sep 23 11:39:21 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984115697Z" level=info msg="Starting up"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.984939583Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 11:39:21 functional-877700 dockerd[4212]: time="2024-09-23T11:39:21.986050598Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=4218
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.016036706Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039700313Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039806824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039839228Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039850929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039873232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.039883433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040054452Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040204468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040224670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040235171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040258474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.040353184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045464247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045565559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.045977304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046077715Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046108618Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046167125Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046524364Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046646378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046666980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046682082Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046696783Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.046754290Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047082126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047279447Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047378458Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047396660Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047415362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047427264Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047440265Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047453267Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047467568Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047478569Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047489270Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047499572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047517474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047531675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047552577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047565479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047576180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047587681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047597882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047608984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047620485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047634286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047644388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047654189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047665990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047678791Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047697893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047708595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047722096Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047810506Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047829408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047840109Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047850910Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047860211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047874313Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.047885714Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048256055Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048443976Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048557088Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 11:39:22 functional-877700 dockerd[4218]: time="2024-09-23T11:39:22.048575990Z" level=info msg="containerd successfully booted in 0.034003s"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.033830503Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.063758076Z" level=info msg="Loading containers: start."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.280002266Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.409457886Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.509213361Z" level=info msg="Loading containers: done."
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543477036Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.543685761Z" level=info msg="Daemon has completed initialization"
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.575120708Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 11:39:23 functional-877700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 11:39:23 functional-877700 dockerd[4212]: time="2024-09-23T11:39:23.577119640Z" level=info msg="API listen on [::]:2376"
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115570481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115625188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115637189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.115709298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227172822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227232230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227245531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.227362246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.339817796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342223901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342254105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.342468032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497814816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.497986738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498064248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.498360885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.525834667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526017090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.526076497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.536907970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750278307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750411323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750425225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.750516437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900046084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.900773476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902226560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.902856440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.939828625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940224975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.940315387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:25 functional-877700 dockerd[4218]: time="2024-09-23T11:39:25.942516766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.185965232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.189330369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193023849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.193305085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261234511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261411734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261512547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.261673268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428281214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428580653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.428743174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.432109011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694521004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694741033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694790839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:26 functional-877700 dockerd[4218]: time="2024-09-23T11:39:26.694956561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.566925297Z" level=info msg="ignoring event" container=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568301726Z" level=info msg="shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.568863020Z" level=warning msg="cleaning up after shim disconnected" id=7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.569351401Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573603109Z" level=info msg="shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573696525Z" level=warning msg="cleaning up after shim disconnected" id=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.573707227Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.573984273Z" level=info msg="ignoring event" container=b2e0af0c325649814dddcac2786f6d80b113c1cc62596738a75bd855e69508a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.578903092Z" level=info msg="ignoring event" container=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.579643816Z" level=info msg="shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581215878Z" level=warning msg="cleaning up after shim disconnected" id=2ce685dbaa7fc1aa87d7ed39a7d8f7de8cde519599f8e3113fa981e2fb1cfac8 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.581346099Z" level=info msg="ignoring event" container=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581428513Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581951000Z" level=info msg="shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.581995407Z" level=warning msg="cleaning up after shim disconnected" id=94ebe68eaa345a7056b4455b7ba5928081ed4647bd5de44fc38adc216bf61ef4 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.582004009Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.597570502Z" level=info msg="ignoring event" container=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610389638Z" level=info msg="shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610530461Z" level=warning msg="cleaning up after shim disconnected" id=9593a0bf03ca074a54ab9a23518f9f5a1007b4453b38dfcc9d304b81be731b94 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.610645881Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.621577202Z" level=info msg="ignoring event" container=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622367133Z" level=info msg="shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622475751Z" level=warning msg="cleaning up after shim disconnected" id=f338105492d684e1fde6376aff6d8235f066f0d4caf9827ab5b405792617b25b namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.622546463Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.635696754Z" level=info msg="shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636175234Z" level=warning msg="cleaning up after shim disconnected" id=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.636360765Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.651928858Z" level=info msg="ignoring event" container=9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.652691485Z" level=info msg="shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.660686917Z" level=info msg="ignoring event" container=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4212]: time="2024-09-23T11:39:27.663213638Z" level=info msg="ignoring event" container=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.669991668Z" level=warning msg="cleaning up after shim disconnected" id=62acba4787244e8bfec11a9869983cf45fa76d82268d8102de387d24ca5b531e namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.670085183Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.660509388Z" level=info msg="shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.679562562Z" level=warning msg="cleaning up after shim disconnected" id=f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32 namespace=moby
	Sep 23 11:39:27 functional-877700 dockerd[4218]: time="2024-09-23T11:39:27.681033207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184736627Z" level=info msg="shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184785835Z" level=warning msg="cleaning up after shim disconnected" id=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.184795637Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.185745696Z" level=info msg="ignoring event" container=5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.278984682Z" level=info msg="shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279055694Z" level=warning msg="cleaning up after shim disconnected" id=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.279067096Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:28 functional-877700 dockerd[4212]: time="2024-09-23T11:39:28.279977748Z" level=info msg="ignoring event" container=4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410094699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410458659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410613285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.410941440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568446569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568537384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568556887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.568657804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.670526533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676344305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676364709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.676451423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710366393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710454707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710469310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.710551424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732630814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732737932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.732870054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.733058486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997622111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997807842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997867052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:28 functional-877700 dockerd[4218]: time="2024-09-23T11:39:28.997990773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094861386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.094998109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095017513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.095210746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.331029320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333698174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.333727979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:34 functional-877700 dockerd[4218]: time="2024-09-23T11:39:34.334197760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:35 functional-877700 dockerd[4212]: time="2024-09-23T11:39:35.396427521Z" level=error msg="collecting stats for container /k8s_coredns_coredns-7c65d6cfc9-68rgs_kube-system_207034a8-50d8-43ec-b01c-2e0a29efdc66_1: invalid id: "
	Sep 23 11:39:35 functional-877700 dockerd[4212]: 2024/09/23 11:39:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Sep 23 11:39:37 functional-877700 dockerd[4212]: time="2024-09-23T11:39:37.337336680Z" level=info msg="ignoring event" container=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338143319Z" level=info msg="shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338506781Z" level=warning msg="cleaning up after shim disconnected" id=6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1 namespace=moby
	Sep 23 11:39:37 functional-877700 dockerd[4218]: time="2024-09-23T11:39:37.338593696Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.137955358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138247609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138416139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.138621075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198271806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198440435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.198515748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.199563031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.222966524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223195264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223281379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:40 functional-877700 dockerd[4218]: time="2024-09-23T11:39:40.223640342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981692372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981830996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981859501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:43 functional-877700 dockerd[4218]: time="2024-09-23T11:39:43.981957318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.084899403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085158149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.085423195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.087385540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.514583300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.522875456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523082393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:39:44 functional-877700 dockerd[4218]: time="2024-09-23T11:39:44.523369543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.063783746Z" level=info msg="Processing signal 'terminated'"
	Sep 23 11:42:57 functional-877700 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.269387669Z" level=info msg="ignoring event" container=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272857372Z" level=info msg="shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272914983Z" level=warning msg="cleaning up after shim disconnected" id=85217232ef302d10b541ec1898ad31dab6fcca277519ccf3170afe951efad9e3 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.272989998Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273759654Z" level=info msg="shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273807364Z" level=warning msg="cleaning up after shim disconnected" id=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.273816366Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274076818Z" level=info msg="ignoring event" container=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.274334971Z" level=info msg="ignoring event" container=7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281295780Z" level=info msg="shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281347090Z" level=warning msg="cleaning up after shim disconnected" id=1318e37c62eb1379206227f45bb9faa39a7139a76e7866f10fd566e5f1994a86 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.281429207Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.281785979Z" level=info msg="ignoring event" container=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288761591Z" level=info msg="shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288857311Z" level=warning msg="cleaning up after shim disconnected" id=9e117c8aa2e8f3ad49da0f4ab83077965b31cc39d3ca5b8a77ad9f93e691f091 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.288899919Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.289709683Z" level=info msg="ignoring event" container=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.289885119Z" level=info msg="shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290052853Z" level=warning msg="cleaning up after shim disconnected" id=9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.290077758Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.301171104Z" level=info msg="shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.301356441Z" level=info msg="ignoring event" container=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.303740624Z" level=info msg="ignoring event" container=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.304055988Z" level=info msg="ignoring event" container=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305020783Z" level=warning msg="cleaning up after shim disconnected" id=3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.305125504Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314046710Z" level=info msg="ignoring event" container=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.314082117Z" level=info msg="ignoring event" container=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.302995873Z" level=info msg="shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314416185Z" level=warning msg="cleaning up after shim disconnected" id=e1988c7f254dd238067ec3d72526598468adde4cef9ff8a6edc39086830d48f4 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.314428687Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317021012Z" level=info msg="shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317067522Z" level=warning msg="cleaning up after shim disconnected" id=f760ad6f83776f690bedbb4ce6091c1bdf7d2a0ea655d665ed1e501c4295ce03 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.317077724Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.303304235Z" level=info msg="shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323122848Z" level=warning msg="cleaning up after shim disconnected" id=873b07335931f6d580c611c0f883059ac8801dee6bcabe80252c3dd137260697 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.323280880Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331804205Z" level=info msg="shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331915728Z" level=warning msg="cleaning up after shim disconnected" id=e4559b860c3c91c70e1f44eb35c905386763cbbab4f69e8f41fcf73b81947065 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.331964638Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.351922178Z" level=info msg="ignoring event" container=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4212]: time="2024-09-23T11:42:57.352121318Z" level=info msg="ignoring event" container=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.352878171Z" level=info msg="shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353018500Z" level=warning msg="cleaning up after shim disconnected" id=2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353272851Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.353087514Z" level=info msg="shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366800790Z" level=warning msg="cleaning up after shim disconnected" id=7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1 namespace=moby
	Sep 23 11:42:57 functional-877700 dockerd[4218]: time="2024-09-23T11:42:57.366924715Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4212]: time="2024-09-23T11:43:02.178902577Z" level=info msg="ignoring event" container=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.180564113Z" level=info msg="shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.181657335Z" level=warning msg="cleaning up after shim disconnected" id=033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee namespace=moby
	Sep 23 11:43:02 functional-877700 dockerd[4218]: time="2024-09-23T11:43:02.182298464Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.148744009Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.187274688Z" level=info msg="ignoring event" container=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187333094Z" level=info msg="shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187370797Z" level=warning msg="cleaning up after shim disconnected" id=c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4218]: time="2024-09-23T11:43:07.187380798Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.255786085Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256042511Z" level=info msg="Daemon shutdown complete"
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256207728Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 11:43:07 functional-877700 dockerd[4212]: time="2024-09-23T11:43:07.256269334Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 11:43:08 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:43:08 functional-877700 systemd[1]: docker.service: Consumed 9.026s CPU time.
	Sep 23 11:43:08 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:43:08 functional-877700 dockerd[8665]: time="2024-09-23T11:43:08.304403480Z" level=info msg="Starting up"
	Sep 23 11:44:08 functional-877700 dockerd[8665]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 11:44:08 functional-877700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 11:44:08 functional-877700 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0923 11:44:08.210247    7852 out.go:270] * 
	W0923 11:44:08.211375    7852 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 11:44:08.216471    7852 out.go:201] 
	
	
	==> Docker <==
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1'"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8'"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8'"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1'"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32'"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1'"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f'"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID '7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024'"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1'"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="error getting RW layer size for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: time="2024-09-23T11:55:11Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d'"
	Sep 23 11:55:11 functional-877700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Sep 23 11:55:11 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 11:55:11 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 11:55:11 functional-877700 cri-dockerd[4491]: W0923 11:55:11.184207    4491 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-23T11:55:13Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +7.212864] kauditd_printk_skb: 88 callbacks suppressed
	[Sep23 11:38] kauditd_printk_skb: 10 callbacks suppressed
	[Sep23 11:39] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.560403] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.266225] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.259657] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +5.242721] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.039221] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.187714] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.177919] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.247363] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.078027] systemd-fstab-generator[4815]: Ignoring "noauto" option for root device
	[  +0.552186] kauditd_printk_skb: 169 callbacks suppressed
	[  +8.107083] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.047969] systemd-fstab-generator[6111]: Ignoring "noauto" option for root device
	[  +0.111564] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.007075] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.332556] systemd-fstab-generator[6642]: Ignoring "noauto" option for root device
	[  +0.159781] kauditd_printk_skb: 3 callbacks suppressed
	[Sep23 11:42] systemd-fstab-generator[8204]: Ignoring "noauto" option for root device
	[  +0.163607] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.439546] systemd-fstab-generator[8239]: Ignoring "noauto" option for root device
	[  +0.232654] systemd-fstab-generator[8251]: Ignoring "noauto" option for root device
	[  +0.268342] systemd-fstab-generator[8265]: Ignoring "noauto" option for root device
	[Sep23 11:43] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 11:56:11 up 20 min,  0 users,  load average: 0.04, 0.06, 0.09
	Linux functional-877700 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 23 11:56:09 functional-877700 kubelet[6119]: I0923 11:56:09.505951    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:56:09 functional-877700 kubelet[6119]: E0923 11:56:09.780014    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 13m13.022881102s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.288431    6119 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.289013    6119 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.289029    6119 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.288922    6119 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.289080    6119 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: I0923 11:56:11.289092    6119 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.288948    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.289108    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.288817    6119 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.289192    6119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.288894    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.289213    6119 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.290016    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.290054    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.290325    6119 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.429571    6119 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: I0923 11:56:11.429727    6119 setters.go:600] "Node became not ready" node="functional-877700" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-23T11:56:11Z","lastTransitionTime":"2024-09-23T11:56:11Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 13m14.672594225s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/run/docker.sock: read: connection reset by peer]"}
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.431279    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T11:56:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T11:56:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T11:56:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T11:56:11Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-23T11:56:11Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 13m14.672594225s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to
get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-877700\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700/status?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.433979    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.434781    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.435230    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.436656    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 11:56:11 functional-877700 kubelet[6119]: E0923 11:56:11.436674    6119 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0923 11:54:10.482058    8436 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:54:10.513166    8436 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:54:10.552621    8436 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:54:10.588485    8436 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:54:10.615459    8436 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:54:10.644467    8436 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:54:10.672459    8436 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:55:10.958504    8436 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.47/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_storage-provisioner%22%3Atrue%7D%7D": read unix @->/run/docker.sock: read: connection reset by peer

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700: exit status 2 (10.1267736s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-877700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (174.17s)
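The stderr above shows two distinct symptoms of the same root cause (the Docker daemon inside the guest VM going down): a clean "Cannot connect to the Docker daemon" refusal for each container listing, and a mid-request "connection reset by peer" on the unix socket. When scanning large reports for these, a small triage helper can bucket the lines; the sketch below is hypothetical (not part of the minikube test suite) and its regexes are illustrative:

```python
import re

# Patterns for the two Docker-daemon failure modes seen in the stderr above.
# Illustrative only; adjust to the exact log format in use.
DAEMON_DOWN = re.compile(r"Cannot connect to the Docker daemon at \S+")
CONN_RESET = re.compile(r"read unix @->/run/docker\.sock: read: connection reset by peer")

def classify(line: str) -> str:
    """Bucket a stderr line by Docker failure mode."""
    if DAEMON_DOWN.search(line):
        return "daemon-unreachable"
    if CONN_RESET.search(line):
        return "connection-reset"
    return "other"

sample = [
    "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?",
    "read unix @->/run/docker.sock: read: connection reset by peer",
]
print([classify(s) for s in sample])  # ['daemon-unreachable', 'connection-reset']
```

Runs of "daemon-unreachable" followed by a "connection-reset" (as here, at 11:54 vs 11:55) suggest the daemon was already dead for the early calls and dropped an in-flight request later.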

TestFunctional/parallel/PersistentVolumeClaim (486.34s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
E0923 11:55:29.651579    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: (last message repeated 46 times)
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://172.19.157.210:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": context deadline exceeded
functional_test_pvc_test.go:44: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700
functional_test_pvc_test.go:44: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700: exit status 2 (11.2926573s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:44: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:44: "functional-877700" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700: exit status 2 (10.9652636s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs -n 25: (3m33.3237183s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ssh        | functional-877700 ssh cat                                                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | /etc/hostname                                                                                       |                   |                   |         |                     |                     |
	| addons     | functional-877700 addons list                                                                       | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	| addons     | functional-877700 addons list                                                                       | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | -o json                                                                                             |                   |                   |         |                     |                     |
	| cp         | functional-877700 cp functional-877700:/home/docker/cp-test.txt                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2601736257\001\cp-test.txt |                   |                   |         |                     |                     |
	| service    | functional-877700 service list                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	| ssh        | functional-877700 ssh -n                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | functional-877700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service    | functional-877700 service list                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|            | -o json                                                                                             |                   |                   |         |                     |                     |
	| cp         | functional-877700 cp                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| service    | functional-877700 service                                                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|            | --namespace=default --https                                                                         |                   |                   |         |                     |                     |
	|            | --url hello-node                                                                                    |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh -n                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:54 UTC |
	|            | functional-877700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| service    | functional-877700                                                                                   | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|            | service hello-node --url                                                                            |                   |                   |         |                     |                     |
	|            | --format={{.IP}}                                                                                    |                   |                   |         |                     |                     |
	| service    | functional-877700 service                                                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:54 UTC |                     |
	|            | hello-node --url                                                                                    |                   |                   |         |                     |                     |
	| start      | -p functional-877700                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|            | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|            | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| start      | -p functional-877700                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|            | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|            | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| start      | -p functional-877700 --dry-run                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	|            | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| dashboard  | --url --port 36195                                                                                  | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | -p functional-877700                                                                                |                   |                   |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo                                                                          | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| license    |                                                                                                     | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:56 UTC |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/3844.pem                                                                             |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /usr/share/ca-certificates/3844.pem                                                                 |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/51391683.0                                                                           |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/38442.pem                                                                            |                   |                   |         |                     |                     |
	| docker-env | functional-877700 docker-env                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /usr/share/ca-certificates/38442.pem                                                                |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC |                     |
	|            | /etc/ssl/certs/3ec20f2e.0                                                                           |                   |                   |         |                     |                     |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:56:31
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:56:31.687379    8496 out.go:345] Setting OutFile to fd 1404 ...
	I0923 11:56:31.735460    8496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:31.735460    8496 out.go:358] Setting ErrFile to fd 1224...
	I0923 11:56:31.735460    8496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:31.757740    8496 out.go:352] Setting JSON to false
	I0923 11:56:31.759670    8496 start.go:129] hostinfo: {"hostname":"minikube5","uptime":488568,"bootTime":1726604023,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:56:31.760609    8496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:56:31.764607    8496 out.go:177] * [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:56:31.767599    8496 notify.go:220] Checking for updates...
	I0923 11:56:31.768640    8496 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:56:31.770811    8496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:56:31.772822    8496 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:56:31.774096    8496 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:56:31.776054    8496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:56:31.779125    8496 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:56:31.779983    8496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:56:36.446347    8496 out.go:177] * Using the hyperv driver based on existing profile
	I0923 11:56:36.449137    8496 start.go:297] selected driver: hyperv
	I0923 11:56:36.449137    8496 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:36.449137    8496 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:56:36.489530    8496 cni.go:84] Creating CNI manager for ""
	I0923 11:56:36.489530    8496 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:56:36.489530    8496 start.go:340] cluster config:
	{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:36.494735    8496 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee'"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error getting RW layer size for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3'"
	Sep 23 12:00:12 functional-877700 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error getting RW layer size for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1'"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error getting RW layer size for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d'"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error getting RW layer size for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32'"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error getting RW layer size for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8'"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error getting RW layer size for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f'"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error getting RW layer size for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1'"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error getting RW layer size for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1'"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="error getting RW layer size for container ID '7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:00:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:00:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024'"
	Sep 23 12:00:12 functional-877700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Sep 23 12:00:12 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 12:00:12 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-23T12:00:14Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.560403] systemd-fstab-generator[3779]: Ignoring "noauto" option for root device
	[  +0.266225] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.259657] systemd-fstab-generator[3806]: Ignoring "noauto" option for root device
	[  +5.242721] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.039221] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.187714] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.177919] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.247363] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.078027] systemd-fstab-generator[4815]: Ignoring "noauto" option for root device
	[  +0.552186] kauditd_printk_skb: 169 callbacks suppressed
	[  +8.107083] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.047969] systemd-fstab-generator[6111]: Ignoring "noauto" option for root device
	[  +0.111564] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.007075] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.332556] systemd-fstab-generator[6642]: Ignoring "noauto" option for root device
	[  +0.159781] kauditd_printk_skb: 3 callbacks suppressed
	[Sep23 11:42] systemd-fstab-generator[8204]: Ignoring "noauto" option for root device
	[  +0.163607] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.439546] systemd-fstab-generator[8239]: Ignoring "noauto" option for root device
	[  +0.232654] systemd-fstab-generator[8251]: Ignoring "noauto" option for root device
	[  +0.268342] systemd-fstab-generator[8265]: Ignoring "noauto" option for root device
	[Sep23 11:43] kauditd_printk_skb: 89 callbacks suppressed
	[Sep23 11:57] systemd-fstab-generator[13188]: Ignoring "noauto" option for root device
	[Sep23 11:58] systemd-fstab-generator[13478]: Ignoring "noauto" option for root device
	[  +0.132871] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:01:12 up 25 min,  0 users,  load average: 0.06, 0.08, 0.09
	Linux functional-877700 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 23 12:00:59 functional-877700 kubelet[6119]: I0923 12:00:59.504974    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:00:59 functional-877700 kubelet[6119]: E0923 12:00:59.835028    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 18m3.077885916s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 12:01:01 functional-877700 kubelet[6119]: E0923 12:01:01.885142    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 12:01:01 functional-877700 kubelet[6119]: E0923 12:01:01.952990    6119 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-877700.17f7dcd22c1212d6\": dial tcp 172.19.157.210:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-877700.17f7dcd22c1212d6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-877700,UID:d94a2590761a98c126cc01e55566a60c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.19.157.210:8441/readyz\": dial tcp 172.19.157.210:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-877700,},FirstTimestamp:2024-09-23 11:42:57.360499414 +0000 UTC m=+198.058688087,LastTimes
tamp:2024-09-23 11:43:00.361430138 +0000 UTC m=+201.059618911,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-877700,}"
	Sep 23 12:01:04 functional-877700 kubelet[6119]: E0923 12:01:04.836632    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 18m8.079490433s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 12:01:08 functional-877700 kubelet[6119]: E0923 12:01:08.887810    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 12:01:09 functional-877700 kubelet[6119]: I0923 12:01:09.504735    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:01:09 functional-877700 kubelet[6119]: I0923 12:01:09.506373    6119 status_manager.go:851] "Failed to get status for pod" podUID="d94a2590761a98c126cc01e55566a60c" pod="kube-system/kube-apiserver-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:01:09 functional-877700 kubelet[6119]: E0923 12:01:09.837135    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 18m13.079995677s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 12:01:11 functional-877700 kubelet[6119]: E0923 12:01:11.954916    6119 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-877700.17f7dcd22c1212d6\": dial tcp 172.19.157.210:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-877700.17f7dcd22c1212d6  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-877700,UID:d94a2590761a98c126cc01e55566a60c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.19.157.210:8441/readyz\": dial tcp 172.19.157.210:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-877700,},FirstTimestamp:2024-09-23 11:42:57.360499414 +0000 UTC m=+198.058688087,LastTimes
tamp:2024-09-23 11:43:00.361430138 +0000 UTC m=+201.059618911,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-877700,}"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.282034    6119 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.282092    6119 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.282107    6119 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.282130    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.282153    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.282277    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.282328    6119 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.283034    6119 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.283105    6119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.283130    6119 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.283184    6119 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: I0923 12:01:12.283199    6119 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.283325    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.283347    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 23 12:01:12 functional-877700 kubelet[6119]: E0923 12:01:12.284124    6119 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:58:11.504694    3616 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:58:11.532400    3616 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:58:11.581393    3616 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 11:59:11.749018    3616 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:00:11.832625    3616 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:00:11.871632    3616 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:00:11.909266    3616 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:00:11.936263    3616 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700: exit status 2 (10.7394194s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-877700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (486.34s)

TestFunctional/parallel/MySQL (112.51s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-877700 replace --force -f testdata\mysql.yaml
functional_test.go:1793: (dbg) Non-zero exit: kubectl --context functional-877700 replace --force -f testdata\mysql.yaml: exit status 1 (4.2114253s)

** stderr ** 
	error when deleting "testdata\\mysql.yaml": Delete "https://172.19.157.210:8441/api/v1/namespaces/default/services/mysql": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
	error when deleting "testdata\\mysql.yaml": Delete "https://172.19.157.210:8441/apis/apps/v1/namespaces/default/deployments/mysql": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1795: failed to kubectl replace mysql: args "kubectl --context functional-877700 replace --force -f testdata\\mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700: exit status 2 (10.3577064s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs -n 25: (1m27.3127348s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|------------|---------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                 Args                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|---------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| service    | functional-877700 service list        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|            | -o json                               |                   |                   |         |                     |                     |
	| cp         | functional-877700 cp                  | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | testdata\cp-test.txt                  |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt       |                   |                   |         |                     |                     |
	| service    | functional-877700 service             | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|            | --namespace=default --https           |                   |                   |         |                     |                     |
	|            | --url hello-node                      |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh -n              | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:54 UTC |
	|            | functional-877700 sudo cat            |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt       |                   |                   |         |                     |                     |
	| service    | functional-877700                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|            | service hello-node --url              |                   |                   |         |                     |                     |
	|            | --format={{.IP}}                      |                   |                   |         |                     |                     |
	| service    | functional-877700 service             | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:54 UTC |                     |
	|            | hello-node --url                      |                   |                   |         |                     |                     |
	| start      | -p functional-877700                  | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | --dry-run --memory                    |                   |                   |         |                     |                     |
	|            | 250MB --alsologtostderr               |                   |                   |         |                     |                     |
	|            | --driver=hyperv                       |                   |                   |         |                     |                     |
	| start      | -p functional-877700                  | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | --dry-run --memory                    |                   |                   |         |                     |                     |
	|            | 250MB --alsologtostderr               |                   |                   |         |                     |                     |
	|            | --driver=hyperv                       |                   |                   |         |                     |                     |
	| start      | -p functional-877700 --dry-run        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | --alsologtostderr -v=1                |                   |                   |         |                     |                     |
	|            | --driver=hyperv                       |                   |                   |         |                     |                     |
	| dashboard  | --url --port 36195                    | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | -p functional-877700                  |                   |                   |         |                     |                     |
	|            | --alsologtostderr -v=1                |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | systemctl is-active crio              |                   |                   |         |                     |                     |
	| license    |                                       | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:56 UTC |
	| ssh        | functional-877700 ssh sudo cat        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/3844.pem               |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /usr/share/ca-certificates/3844.pem   |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/51391683.0             |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/38442.pem              |                   |                   |         |                     |                     |
	| docker-env | functional-877700 docker-env          | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC |                     |
	| ssh        | functional-877700 ssh sudo cat        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /usr/share/ca-certificates/38442.pem  |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/3ec20f2e.0             |                   |                   |         |                     |                     |
	| image      | functional-877700 image load --daemon | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:58 UTC |
	|            | kicbase/echo-server:functional-877700 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                     |                   |                   |         |                     |                     |
	| image      | functional-877700 image ls            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:58 UTC | 23 Sep 24 11:59 UTC |
	| image      | functional-877700 image load --daemon | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:59 UTC | 23 Sep 24 12:00 UTC |
	|            | kicbase/echo-server:functional-877700 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                     |                   |                   |         |                     |                     |
	| image      | functional-877700 image ls            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:00 UTC | 23 Sep 24 12:01 UTC |
	| image      | functional-877700 image load --daemon | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC |                     |
	|            | kicbase/echo-server:functional-877700 |                   |                   |         |                     |                     |
	|            | --alsologtostderr                     |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|            | /etc/test/nested/copy/3844/hosts      |                   |                   |         |                     |                     |
	|------------|---------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:56:31
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:56:31.687379    8496 out.go:345] Setting OutFile to fd 1404 ...
	I0923 11:56:31.735460    8496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:31.735460    8496 out.go:358] Setting ErrFile to fd 1224...
	I0923 11:56:31.735460    8496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:31.757740    8496 out.go:352] Setting JSON to false
	I0923 11:56:31.759670    8496 start.go:129] hostinfo: {"hostname":"minikube5","uptime":488568,"bootTime":1726604023,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:56:31.760609    8496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:56:31.764607    8496 out.go:177] * [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:56:31.767599    8496 notify.go:220] Checking for updates...
	I0923 11:56:31.768640    8496 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:56:31.770811    8496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:56:31.772822    8496 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:56:31.774096    8496 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:56:31.776054    8496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:56:31.779125    8496 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:56:31.779983    8496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:56:36.446347    8496 out.go:177] * Using the hyperv driver based on existing profile
	I0923 11:56:36.449137    8496 start.go:297] selected driver: hyperv
	I0923 11:56:36.449137    8496 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:36.449137    8496 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:56:36.489530    8496 cni.go:84] Creating CNI manager for ""
	I0923 11:56:36.489530    8496 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:56:36.489530    8496 start.go:340] cluster config:
	{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:36.494735    8496 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1'"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="error getting RW layer size for container ID '7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7c21f80b1432a0742c93dbcff82ac016215129ee271e6523458e306dda8a2024'"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="error getting RW layer size for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1'"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="error getting RW layer size for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1'"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="error getting RW layer size for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:02:12 functional-877700 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 12:02:12 functional-877700 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 12:02:12 functional-877700 systemd[1]: Failed to start Docker Application Container Engine.
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8'"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="error getting RW layer size for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3'"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="error getting RW layer size for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8'"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="error getting RW layer size for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1'"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="error getting RW layer size for container ID '5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5cdb3588e91654675a366c1c8ce20f1146fc7f6a87813681e78c61ccacc93848'"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="error getting RW layer size for container ID '033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:02:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:02:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee'"
	Sep 23 12:02:12 functional-877700 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Sep 23 12:02:12 functional-877700 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 12:02:12 functional-877700 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-23T12:02:14Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +8.039221] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.187714] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.177919] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.247363] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.078027] systemd-fstab-generator[4815]: Ignoring "noauto" option for root device
	[  +0.552186] kauditd_printk_skb: 169 callbacks suppressed
	[  +8.107083] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.047969] systemd-fstab-generator[6111]: Ignoring "noauto" option for root device
	[  +0.111564] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.007075] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.332556] systemd-fstab-generator[6642]: Ignoring "noauto" option for root device
	[  +0.159781] kauditd_printk_skb: 3 callbacks suppressed
	[Sep23 11:42] systemd-fstab-generator[8204]: Ignoring "noauto" option for root device
	[  +0.163607] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.439546] systemd-fstab-generator[8239]: Ignoring "noauto" option for root device
	[  +0.232654] systemd-fstab-generator[8251]: Ignoring "noauto" option for root device
	[  +0.268342] systemd-fstab-generator[8265]: Ignoring "noauto" option for root device
	[Sep23 11:43] kauditd_printk_skb: 89 callbacks suppressed
	[Sep23 11:57] systemd-fstab-generator[13188]: Ignoring "noauto" option for root device
	[Sep23 11:58] systemd-fstab-generator[13478]: Ignoring "noauto" option for root device
	[  +0.132871] kauditd_printk_skb: 12 callbacks suppressed
	[Sep23 12:02] systemd-fstab-generator[14723]: Ignoring "noauto" option for root device
	[  +0.135766] kauditd_printk_skb: 12 callbacks suppressed
	[Sep23 12:03] systemd-fstab-generator[15060]: Ignoring "noauto" option for root device
	[  +0.134016] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:03:13 up 27 min,  0 users,  load average: 0.00, 0.05, 0.07
	Linux functional-877700 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 23 12:02:59 functional-877700 kubelet[6119]: I0923 12:02:59.504598    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:02:59 functional-877700 kubelet[6119]: E0923 12:02:59.859234    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 20m3.102095348s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 12:03:00 functional-877700 kubelet[6119]: E0923 12:03:00.047195    6119 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/etcd-functional-877700.17f7dcd24220a3fe\": dial tcp 172.19.157.210:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-877700.17f7dcd24220a3fe  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-877700,UID:1a2024253238820dd6dd104df30a6dbf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-877700,},FirstTimestamp:2024-09-23 11:42:57.73055283 +0000 UTC m=+198.428741603,LastTimestamp:2024-09-23 11:43:00.730261806 +0000 UTC m=+201.428450479,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-877700,}"
	Sep 23 12:03:00 functional-877700 kubelet[6119]: E0923 12:03:00.930045    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 12:03:04 functional-877700 kubelet[6119]: E0923 12:03:04.860576    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 20m8.103440662s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 12:03:07 functional-877700 kubelet[6119]: E0923 12:03:07.932256    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 12:03:09 functional-877700 kubelet[6119]: I0923 12:03:09.502226    6119 status_manager.go:851] "Failed to get status for pod" podUID="d94a2590761a98c126cc01e55566a60c" pod="kube-system/kube-apiserver-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:03:09 functional-877700 kubelet[6119]: I0923 12:03:09.502809    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:03:09 functional-877700 kubelet[6119]: E0923 12:03:09.861852    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 20m13.104716592s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 12:03:10 functional-877700 kubelet[6119]: E0923 12:03:10.049887    6119 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/etcd-functional-877700.17f7dcd24220a3fe\": dial tcp 172.19.157.210:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-877700.17f7dcd24220a3fe  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-877700,UID:1a2024253238820dd6dd104df30a6dbf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-877700,},FirstTimestamp:2024-09-23 11:42:57.73055283 +0000 UTC m=+198.428741603,LastTimestamp:2024-09-23 11:43:00.730261806 +0000 UTC m=+201.428450479,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-877700,}"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.875975    6119 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.878429    6119 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.878627    6119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.876034    6119 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.881209    6119 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.881246    6119 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.881256    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.881401    6119 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: I0923 12:03:12.881417    6119 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.882072    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.882107    6119 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.881441    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.883861    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.884984    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Sep 23 12:03:12 functional-877700 kubelet[6119]: E0923 12:03:12.885372    6119 kubelet.go:1446] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 12:02:12.288481    8260 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:02:12.319587    8260 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:02:12.364223    8260 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:02:12.393638    8260 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:02:12.422841    8260 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:02:12.448639    8260 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:02:12.480324    8260 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:02:12.515862    8260 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700: exit status 2 (10.6059146s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-877700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (112.51s)

                                                
                                    
TestFunctional/parallel/NodeLabels (360.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-877700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-877700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (2.1487491s)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-877700 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	Unable to connect to the server: dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-877700 -n functional-877700: exit status 2 (10.2453273s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs -n 25: (5m38.0787201s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|  Command   |                                                Args                                                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| addons     | functional-877700 addons list                                                                       | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | -o json                                                                                             |                   |                   |         |                     |                     |
	| cp         | functional-877700 cp functional-877700:/home/docker/cp-test.txt                                     | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2601736257\001\cp-test.txt |                   |                   |         |                     |                     |
	| service    | functional-877700 service list                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	| ssh        | functional-877700 ssh -n                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | functional-877700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /home/docker/cp-test.txt                                                                            |                   |                   |         |                     |                     |
	| service    | functional-877700 service list                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|            | -o json                                                                                             |                   |                   |         |                     |                     |
	| cp         | functional-877700 cp                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	|            | testdata\cp-test.txt                                                                                |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| service    | functional-877700 service                                                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|            | --namespace=default --https                                                                         |                   |                   |         |                     |                     |
	|            | --url hello-node                                                                                    |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh -n                                                                            | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:54 UTC |
	|            | functional-877700 sudo cat                                                                          |                   |                   |         |                     |                     |
	|            | /tmp/does/not/exist/cp-test.txt                                                                     |                   |                   |         |                     |                     |
	| service    | functional-877700                                                                                   | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|            | service hello-node --url                                                                            |                   |                   |         |                     |                     |
	|            | --format={{.IP}}                                                                                    |                   |                   |         |                     |                     |
	| service    | functional-877700 service                                                                           | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:54 UTC |                     |
	|            | hello-node --url                                                                                    |                   |                   |         |                     |                     |
	| start      | -p functional-877700                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|            | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|            | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| start      | -p functional-877700                                                                                | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | --dry-run --memory                                                                                  |                   |                   |         |                     |                     |
	|            | 250MB --alsologtostderr                                                                             |                   |                   |         |                     |                     |
	|            | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| start      | -p functional-877700 --dry-run                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	|            | --driver=hyperv                                                                                     |                   |                   |         |                     |                     |
	| dashboard  | --url --port 36195                                                                                  | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | -p functional-877700                                                                                |                   |                   |         |                     |                     |
	|            | --alsologtostderr -v=1                                                                              |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo                                                                          | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	|            | systemctl is-active crio                                                                            |                   |                   |         |                     |                     |
	| license    |                                                                                                     | minikube          | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:56 UTC |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/3844.pem                                                                             |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /usr/share/ca-certificates/3844.pem                                                                 |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/51391683.0                                                                           |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/38442.pem                                                                            |                   |                   |         |                     |                     |
	| docker-env | functional-877700 docker-env                                                                        | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /usr/share/ca-certificates/38442.pem                                                                |                   |                   |         |                     |                     |
	| ssh        | functional-877700 ssh sudo cat                                                                      | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|            | /etc/ssl/certs/3ec20f2e.0                                                                           |                   |                   |         |                     |                     |
	| image      | functional-877700 image load --daemon                                                               | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:58 UTC |
	|            | kicbase/echo-server:functional-877700                                                               |                   |                   |         |                     |                     |
	|            | --alsologtostderr                                                                                   |                   |                   |         |                     |                     |
	| image      | functional-877700 image ls                                                                          | functional-877700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:58 UTC |                     |
	|------------|-----------------------------------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:56:31
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:56:31.687379    8496 out.go:345] Setting OutFile to fd 1404 ...
	I0923 11:56:31.735460    8496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:31.735460    8496 out.go:358] Setting ErrFile to fd 1224...
	I0923 11:56:31.735460    8496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:31.757740    8496 out.go:352] Setting JSON to false
	I0923 11:56:31.759670    8496 start.go:129] hostinfo: {"hostname":"minikube5","uptime":488568,"bootTime":1726604023,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:56:31.760609    8496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:56:31.764607    8496 out.go:177] * [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:56:31.767599    8496 notify.go:220] Checking for updates...
	I0923 11:56:31.768640    8496 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:56:31.770811    8496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:56:31.772822    8496 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:56:31.774096    8496 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:56:31.776054    8496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:56:31.779125    8496 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:56:31.779983    8496 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:56:36.446347    8496 out.go:177] * Using the hyperv driver based on existing profile
	I0923 11:56:36.449137    8496 start.go:297] selected driver: hyperv
	I0923 11:56:36.449137    8496 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:36.449137    8496 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:56:36.489530    8496 cni.go:84] Creating CNI manager for ""
	I0923 11:56:36.489530    8496 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:56:36.489530    8496 start.go:340] cluster config:
	{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:36.494735    8496 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86498544573d66d4e830fddbf76f2dd7f5db8b884c00c553b011e8130bea98a8'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9b83feae40112db1d22e1401847666ecf4cb08219c8fa097e6dc81dec17f51a3'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '033c968960434796bf2ee5261cc75c865d4c480dca322491914d26974388e3ee'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '9203c2cf5f2887de3dfa94c65c7a349c1a6f89e07a427cca6a4fba33315394a8'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7d20cc069f125da5726ff8f27df3c8e1a6a0afd986cc47e1db433eed474cfee1'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2840d1510bf70f605de3d051c0f2098c2f57a073c30a9358ed757672d3b074a1'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4cd7dfae51eaeca8d6401e71a5673b887555413cf54f90ce547b2f3d5d464cd1'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '3db622a1a6cec95cd791b8c26d7147a406c91ccb51551948e3daa4e3067a045d'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '7e690e0c114799b65ea6cc9f42dc051c767a7b847564a3fff6f7c8f5aa272f50'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'f0fdd3b0500aa405a1a4ca084848d365c10ac900c38f85b1aeebfd2d5dd78b32'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c219e269d74e8b6fc30f21e806312db8c576c045a2dbb56d1badef1c6ae6244f'"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="error getting RW layer size for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:03:12 functional-877700 cri-dockerd[4491]: time="2024-09-23T12:03:12Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6c5cbfe07adf10e636d4cac27cc2e479c881a7beb0f5c95e3a1df140c6c718c1'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-09-23T12:03:15Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +8.039221] systemd-fstab-generator[4444]: Ignoring "noauto" option for root device
	[  +0.187714] systemd-fstab-generator[4456]: Ignoring "noauto" option for root device
	[  +0.177919] systemd-fstab-generator[4468]: Ignoring "noauto" option for root device
	[  +0.247363] systemd-fstab-generator[4483]: Ignoring "noauto" option for root device
	[  +1.078027] systemd-fstab-generator[4815]: Ignoring "noauto" option for root device
	[  +0.552186] kauditd_printk_skb: 169 callbacks suppressed
	[  +8.107083] kauditd_printk_skb: 67 callbacks suppressed
	[  +5.047969] systemd-fstab-generator[6111]: Ignoring "noauto" option for root device
	[  +0.111564] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.007075] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.332556] systemd-fstab-generator[6642]: Ignoring "noauto" option for root device
	[  +0.159781] kauditd_printk_skb: 3 callbacks suppressed
	[Sep23 11:42] systemd-fstab-generator[8204]: Ignoring "noauto" option for root device
	[  +0.163607] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.439546] systemd-fstab-generator[8239]: Ignoring "noauto" option for root device
	[  +0.232654] systemd-fstab-generator[8251]: Ignoring "noauto" option for root device
	[  +0.268342] systemd-fstab-generator[8265]: Ignoring "noauto" option for root device
	[Sep23 11:43] kauditd_printk_skb: 89 callbacks suppressed
	[Sep23 11:57] systemd-fstab-generator[13188]: Ignoring "noauto" option for root device
	[Sep23 11:58] systemd-fstab-generator[13478]: Ignoring "noauto" option for root device
	[  +0.132871] kauditd_printk_skb: 12 callbacks suppressed
	[Sep23 12:02] systemd-fstab-generator[14723]: Ignoring "noauto" option for root device
	[  +0.135766] kauditd_printk_skb: 12 callbacks suppressed
	[Sep23 12:03] systemd-fstab-generator[15060]: Ignoring "noauto" option for root device
	[  +0.134016] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:04:13 up 28 min,  0 users,  load average: 0.00, 0.04, 0.07
	Linux functional-877700 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 23 12:04:06 functional-877700 kubelet[6119]: E0923 12:04:06.015398    6119 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.19.157.210:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-functional-877700.17f7dcd308c85583  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-functional-877700,UID:75b601e091011beb813ec9f60a3f53d5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/healthz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-877700,},FirstTimestamp:2024-09-23 11:43:01.063431555 +0000 UTC m=+201.761620328,LastTimestamp:2024-09-23 11:43:01.063431555 +0000 UTC m=+201.761620328,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-877700,}"
	Sep 23 12:04:09 functional-877700 kubelet[6119]: I0923 12:04:09.504224    6119 status_manager.go:851] "Failed to get status for pod" podUID="d94a2590761a98c126cc01e55566a60c" pod="kube-system/kube-apiserver-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:04:09 functional-877700 kubelet[6119]: I0923 12:04:09.505316    6119 status_manager.go:851] "Failed to get status for pod" podUID="1a2024253238820dd6dd104df30a6dbf" pod="kube-system/etcd-functional-877700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-877700\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:04:09 functional-877700 kubelet[6119]: E0923 12:04:09.873639    6119 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 21m13.116512304s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Sep 23 12:04:10 functional-877700 kubelet[6119]: E0923 12:04:10.953708    6119 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused" interval="7s"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.130266    6119 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.133340    6119 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.133614    6119 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.145583    6119 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.145800    6119 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.145875    6119 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.145904    6119 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: I0923 12:04:13.145917    6119 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.146052    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.146080    6119 kuberuntime_container.go:507] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.146246    6119 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: I0923 12:04:13.146351    6119 setters.go:600] "Node became not ready" node="functional-877700" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-09-23T12:04:13Z","lastTransitionTime":"2024-09-23T12:04:13Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 21m16.389185548s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/run/docker.sock: read: connection reset by peer]"}
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.146971    6119 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.147008    6119 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.149978    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T12:04:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T12:04:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T12:04:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-23T12:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2024-09-23T12:04:13Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 21m16.389185548s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to
get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-877700\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700/status?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.155062    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.155724    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.156324    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.158891    6119 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"functional-877700\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-877700?timeout=10s\": dial tcp 172.19.157.210:8441: connect: connection refused"
	Sep 23 12:04:13 functional-877700 kubelet[6119]: E0923 12:04:13.158923    6119 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0923 12:00:11.949254     576 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.47/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_kube-apiserver%22%3Atrue%7D%7D": read unix @->/run/docker.sock: read: connection reset by peer
	E0923 12:01:12.049747     576 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:01:12.079869     576 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:01:12.113118     576 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:01:12.142112     576 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:01:12.168113     576 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0923 12:02:12.547515     576 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.47/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_kindnet%22%3Atrue%7D%7D": read unix @->/run/docker.sock: read: connection reset by peer
	E0923 12:03:12.643427     576 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-877700 -n functional-877700: exit status 2 (10.4378012s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-877700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (360.93s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.99s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I0923 11:53:14.832765    7148 out.go:345] Setting OutFile to fd 1116 ...
I0923 11:53:14.972559    7148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:53:14.972598    7148 out.go:358] Setting ErrFile to fd 1120...
I0923 11:53:14.972681    7148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 11:53:14.995900    7148 mustload.go:65] Loading cluster: functional-877700
I0923 11:53:14.997640    7148 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 11:53:14.998415    7148 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 11:53:17.794590    7148 main.go:141] libmachine: [stdout =====>] : Running

I0923 11:53:17.794590    7148 main.go:141] libmachine: [stderr =====>] : 
I0923 11:53:17.794590    7148 host.go:66] Checking if "functional-877700" exists ...
I0923 11:53:17.795589    7148 api_server.go:166] Checking apiserver status ...
I0923 11:53:17.807605    7148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0923 11:53:17.808624    7148 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 11:53:19.982930    7148 main.go:141] libmachine: [stdout =====>] : Running

I0923 11:53:19.982930    7148 main.go:141] libmachine: [stderr =====>] : 
I0923 11:53:19.982930    7148 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
I0923 11:53:22.525509    7148 main.go:141] libmachine: [stdout =====>] : 172.19.157.210

I0923 11:53:22.525509    7148 main.go:141] libmachine: [stderr =====>] : 
I0923 11:53:22.525962    7148 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
I0923 11:53:22.641445    7148 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.8324953s)
W0923 11:53:22.641445    7148 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0923 11:53:22.644418    7148 out.go:177] * The control-plane node functional-877700 apiserver is not running: (state=Stopped)
I0923 11:53:22.647420    7148 out.go:177]   To start a cluster, run: "minikube start -p functional-877700"

stdout: * The control-plane node functional-877700 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-877700"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 13260: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (7.99s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-877700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-877700 apply -f testdata\testsvc.yaml: exit status 1 (4.2019923s)

** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://172.19.157.210:8441/openapi/v2?timeout=32s": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-877700 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (4.21s)

TestFunctional/parallel/ServiceCmd/DeployApp (2.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-877700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1439: (dbg) Non-zero exit: kubectl --context functional-877700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (2.1434654s)

** stderr ** 
	error: failed to create deployment: Post "https://172.19.157.210:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 172.19.157.210:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-877700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (2.15s)

TestFunctional/parallel/ServiceCmd/List (6.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 service list: exit status 103 (6.6911512s)

-- stdout --
	* The control-plane node functional-877700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-877700"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-877700 service list" : exit status 103
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-877700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-877700\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (6.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (6.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 service list -o json: exit status 103 (6.652613s)

-- stdout --
	* The control-plane node functional-877700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-877700"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-877700 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (6.65s)

TestFunctional/parallel/ServiceCmd/HTTPS (6.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 service --namespace=default --https --url hello-node: exit status 103 (6.6348015s)

-- stdout --
	* The control-plane node functional-877700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-877700"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-877700 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (6.64s)

TestFunctional/parallel/ServiceCmd/Format (6.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 service hello-node --url --format={{.IP}}: exit status 103 (6.4438473s)

-- stdout --
	* The control-plane node functional-877700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-877700"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-877700 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1548: "* The control-plane node functional-877700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-877700\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (6.44s)

TestFunctional/parallel/ServiceCmd/URL (6.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 service hello-node --url: exit status 103 (6.4787797s)

-- stdout --
	* The control-plane node functional-877700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-877700"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-877700 service hello-node --url": exit status 103
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-877700 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-877700"
functional_test.go:1569: failed to parse "* The control-plane node functional-877700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-877700\"": parse "* The control-plane node functional-877700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-877700\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (6.48s)

TestFunctional/parallel/DockerEnv/powershell (470.51s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-877700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-877700"
functional_test.go:499: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-877700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-877700": exit status 1 (7m50.5068623s)

** stderr ** 
	X Exiting due to MK_DOCKER_SCRIPT: Error generating set output: write /dev/stdout: The pipe is being closed.
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_docker-env_1f5562ba2f20b73b531869f0520020e4bb661a3b_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	E0923 12:05:13.121380    8624 out.go:221] Fprintf failed: write /dev/stdout: The pipe is being closed.

** /stderr **
functional_test.go:502: failed to run the command by deadline. exceeded timeout. powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-877700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-877700"
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/powershell (470.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (60.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image ls --format short --alsologtostderr: (1m0.0888698s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-877700 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-877700 image ls --format short --alsologtostderr:
I0923 12:08:14.118063    8456 out.go:345] Setting OutFile to fd 1280 ...
I0923 12:08:14.219087    8456 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:08:14.219087    8456 out.go:358] Setting ErrFile to fd 1412...
I0923 12:08:14.219087    8456 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:08:14.232074    8456 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:08:14.232074    8456 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:08:14.233070    8456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:08:16.693139    8456 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:08:16.693427    8456 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:16.708661    8456 ssh_runner.go:195] Run: systemctl --version
I0923 12:08:16.708661    8456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:08:19.027887    8456 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:08:19.027946    8456 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:19.027946    8456 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
I0923 12:08:21.471310    8456 main.go:141] libmachine: [stdout =====>] : 172.19.157.210

I0923 12:08:21.472308    8456 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:21.472559    8456 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
I0923 12:08:21.587446    8456 ssh_runner.go:235] Completed: systemctl --version: (4.8784556s)
I0923 12:08:21.597166    8456 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0923 12:09:14.048596    8456 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.4478638s)
W0923 12:09:14.048743    8456 cache_images.go:734] Failed to list images for profile functional-877700 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (60.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (60.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image ls --format table --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image ls --format table --alsologtostderr: (1m0.2293144s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-877700 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-877700 image ls --format table --alsologtostderr:
I0923 12:09:14.165004    7536 out.go:345] Setting OutFile to fd 1476 ...
I0923 12:09:14.229071    7536 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:14.229071    7536 out.go:358] Setting ErrFile to fd 1508...
I0923 12:09:14.229071    7536 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:09:14.241059    7536 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:09:14.242065    7536 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:09:14.242065    7536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:09:16.148531    7536 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:09:16.148531    7536 main.go:141] libmachine: [stderr =====>] : 
I0923 12:09:16.158199    7536 ssh_runner.go:195] Run: systemctl --version
I0923 12:09:16.158199    7536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:09:18.123830    7536 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:09:18.123830    7536 main.go:141] libmachine: [stderr =====>] : 
I0923 12:09:18.123830    7536 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
I0923 12:09:20.475172    7536 main.go:141] libmachine: [stdout =====>] : 172.19.157.210

I0923 12:09:20.475172    7536 main.go:141] libmachine: [stderr =====>] : 
I0923 12:09:20.475333    7536 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
I0923 12:09:20.580608    7536 ssh_runner.go:235] Completed: systemctl --version: (4.4221109s)
I0923 12:09:20.587741    7536 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0923 12:10:14.274563    7536 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (53.6831986s)
W0923 12:10:14.274563    7536 cache_images.go:734] Failed to list images for profile functional-877700 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (60.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (60.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image ls --format json --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image ls --format json --alsologtostderr: (1m0.0724801s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-877700 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-877700 image ls --format json --alsologtostderr:
I0923 12:08:14.118063   13108 out.go:345] Setting OutFile to fd 1224 ...
I0923 12:08:14.219087   13108 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:08:14.219087   13108 out.go:358] Setting ErrFile to fd 1036...
I0923 12:08:14.219087   13108 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:08:14.245076   13108 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:08:14.245076   13108 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:08:14.246075   13108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:08:16.627081   13108 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:08:16.627134   13108 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:16.645381   13108 ssh_runner.go:195] Run: systemctl --version
I0923 12:08:16.645381   13108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:08:18.915868   13108 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:08:18.915868   13108 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:18.915868   13108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
I0923 12:08:21.414701   13108 main.go:141] libmachine: [stdout =====>] : 172.19.157.210

I0923 12:08:21.414701   13108 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:21.415531   13108 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
I0923 12:08:21.512342   13108 ssh_runner.go:235] Completed: systemctl --version: (4.8666317s)
I0923 12:08:21.523488   13108 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0923 12:09:14.046259   13108 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.5191545s)
W0923 12:09:14.046574   13108 cache_images.go:734] Failed to list images for profile functional-877700 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (60.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (60.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image ls --format yaml --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image ls --format yaml --alsologtostderr: (1m0.0484478s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-877700 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-877700 image ls --format yaml --alsologtostderr:
I0923 12:08:14.114075    7620 out.go:345] Setting OutFile to fd 1212 ...
I0923 12:08:14.197071    7620 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:08:14.197071    7620 out.go:358] Setting ErrFile to fd 1128...
I0923 12:08:14.197071    7620 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:08:14.209067    7620 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:08:14.209067    7620 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:08:14.210069    7620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:08:16.585487    7620 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:08:16.585654    7620 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:16.597149    7620 ssh_runner.go:195] Run: systemctl --version
I0923 12:08:16.597232    7620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:08:18.936414    7620 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:08:18.936641    7620 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:18.936706    7620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
I0923 12:08:21.381411    7620 main.go:141] libmachine: [stdout =====>] : 172.19.157.210

I0923 12:08:21.381545    7620 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:21.381545    7620 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
I0923 12:08:21.476766    7620 ssh_runner.go:235] Completed: systemctl --version: (4.8792384s)
I0923 12:08:21.483996    7620 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0923 12:09:14.046805    7620 ssh_runner.go:235] Completed: docker images --no-trunc --format "{{json .}}": (52.559175s)
W0923 12:09:14.046953    7620 cache_images.go:734] Failed to list images for profile functional-877700 docker images: docker images --no-trunc --format "{{json .}}": Process exited with status 1
stdout:

stderr:
error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (60.05s)

TestFunctional/parallel/ImageCommands/ImageBuild (120.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 ssh pgrep buildkitd: exit status 1 (9.3140128s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image build -t localhost/my-image:functional-877700 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image build -t localhost/my-image:functional-877700 testdata\build --alsologtostderr: (50.8031124s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-877700 image build -t localhost/my-image:functional-877700 testdata\build --alsologtostderr:
I0923 12:08:23.418163    5136 out.go:345] Setting OutFile to fd 1040 ...
I0923 12:08:23.485509    5136 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:08:23.485575    5136 out.go:358] Setting ErrFile to fd 1044...
I0923 12:08:23.485575    5136 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:08:23.501926    5136 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:08:23.517987    5136 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:08:23.518602    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:08:25.351524    5136 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:08:25.351683    5136 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:25.360878    5136 ssh_runner.go:195] Run: systemctl --version
I0923 12:08:25.360878    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-877700 ).state
I0923 12:08:27.212402    5136 main.go:141] libmachine: [stdout =====>] : Running

I0923 12:08:27.212478    5136 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:27.212618    5136 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-877700 ).networkadapters[0]).ipaddresses[0]
I0923 12:08:29.420559    5136 main.go:141] libmachine: [stdout =====>] : 172.19.157.210

I0923 12:08:29.420559    5136 main.go:141] libmachine: [stderr =====>] : 
I0923 12:08:29.421419    5136 sshutil.go:53] new ssh client: &{IP:172.19.157.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-877700\id_rsa Username:docker}
I0923 12:08:29.509535    5136 ssh_runner.go:235] Completed: systemctl --version: (4.1483768s)
I0923 12:08:29.509535    5136 build_images.go:161] Building image from path: C:\Users\jenkins.minikube5\AppData\Local\Temp\build.2583549067.tar
I0923 12:08:29.519300    5136 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 12:08:29.545026    5136 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2583549067.tar
I0923 12:08:29.551105    5136 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2583549067.tar: stat -c "%s %y" /var/lib/minikube/build/build.2583549067.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2583549067.tar': No such file or directory
I0923 12:08:29.551105    5136 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\AppData\Local\Temp\build.2583549067.tar --> /var/lib/minikube/build/build.2583549067.tar (3072 bytes)
I0923 12:08:29.606291    5136 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2583549067
I0923 12:08:29.641323    5136 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2583549067 -xf /var/lib/minikube/build/build.2583549067.tar
I0923 12:08:29.659061    5136 docker.go:360] Building image: /var/lib/minikube/build/build.2583549067
I0923 12:08:29.666426    5136 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-877700 /var/lib/minikube/build/build.2583549067
ERROR: error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
I0923 12:09:14.056140    5136 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-877700 /var/lib/minikube/build/build.2583549067: (44.3861764s)
W0923 12:09:14.056287    5136 build_images.go:125] Failed to build image for profile functional-877700. make sure the profile is running. Docker build /var/lib/minikube/build/build.2583549067.tar: buildimage docker: docker build -t localhost/my-image:functional-877700 /var/lib/minikube/build/build.2583549067: Process exited with status 1
stdout:

stderr:
ERROR: error during connect: Head "http://%2Fvar%2Frun%2Fdocker.sock/_ping": read unix @->/var/run/docker.sock: read: connection reset by peer
I0923 12:09:14.056316    5136 build_images.go:133] succeeded building to: 
I0923 12:09:14.056316    5136 build_images.go:134] failed building to: functional-877700
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image ls: (1m0.1829577s)
functional_test.go:446: expected "localhost/my-image:functional-877700" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (120.30s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (86.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image load --daemon kicbase/echo-server:functional-877700 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image load --daemon kicbase/echo-server:functional-877700 --alsologtostderr: (26.5456455s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image ls: (1m0.2382999s)
functional_test.go:446: expected "kicbase/echo-server:functional-877700" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (86.78s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image load --daemon kicbase/echo-server:functional-877700 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image load --daemon kicbase/echo-server:functional-877700 --alsologtostderr: (1m0.0956351s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image ls
E0923 12:00:29.672039    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image ls: (1m0.1994327s)
functional_test.go:446: expected "kicbase/echo-server:functional-877700" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (120.30s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-877700
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image load --daemon kicbase/echo-server:functional-877700 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image load --daemon kicbase/echo-server:functional-877700 --alsologtostderr: (59.4128379s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image ls: (1m0.3522444s)
functional_test.go:446: expected "kicbase/echo-server:functional-877700" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (120.59s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (120.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image save kicbase/echo-server:functional-877700 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image save kicbase/echo-server:functional-877700 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (2m0.3616215s)
functional_test.go:386: expected "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (120.36s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: exit status 80 (326.7924ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0923 12:07:13.665511    7528 out.go:345] Setting OutFile to fd 1196 ...
	I0923 12:07:13.737802    7528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:07:13.737871    7528 out.go:358] Setting ErrFile to fd 988...
	I0923 12:07:13.737871    7528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:07:13.750113    7528 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:07:13.751129    7528 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	I0923 12:07:13.860789    7528 cache.go:107] acquiring lock: {Name:mkab03a876aa3cd2aa4cbc5169fcc047637169c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:07:13.863810    7528 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" took 112.5786ms
	I0923 12:07:13.867407    7528 out.go:201] 
	W0923 12:07:13.870027    7528 out.go:270] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	W0923 12:07:13.870027    7528 out.go:270] * 
	* 
	W0923 12:07:13.878411    7528 out.go:293] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_image_9d334fddf764ec6d7b0708a9057c4c5712610888_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_image_9d334fddf764ec6d7b0708a9057c4c5712610888_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:07:13.881618    7528 out.go:201] 

** /stderr **
functional_test.go:411: loading image into minikube from file: exit status 80

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 12:07:13.665511    7528 out.go:345] Setting OutFile to fd 1196 ...
	I0923 12:07:13.737802    7528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:07:13.737871    7528 out.go:358] Setting ErrFile to fd 988...
	I0923 12:07:13.737871    7528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:07:13.750113    7528 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:07:13.751129    7528 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\images\amd64\C_\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	I0923 12:07:13.860789    7528 cache.go:107] acquiring lock: {Name:mkab03a876aa3cd2aa4cbc5169fcc047637169c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:07:13.863810    7528 cache.go:96] cache image "C:\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" -> "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar" took 112.5786ms
	I0923 12:07:13.867407    7528 out.go:201] 
	W0923 12:07:13.870027    7528 out.go:270] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\cache\\images\\amd64\\C_\\jenkins\\workspace\\Hyper-V_Windows_integration\\echo-server-save.tar": parsing image ref name for C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar: could not parse reference: C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar
	W0923 12:07:13.870027    7528 out.go:270] * 
	* 
	W0923 12:07:13.878411    7528 out.go:293] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_image_9d334fddf764ec6d7b0708a9057c4c5712610888_0.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:07:13.881618    7528 out.go:201] 

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.33s)

TestMultiControlPlane/serial/PingHostFromPods (64.76s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-45cpz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-45cpz -- sh -c "ping -c 1 172.19.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-45cpz -- sh -c "ping -c 1 172.19.144.1": exit status 1 (10.4394907s)

-- stdout --
	PING 172.19.144.1 (172.19.144.1): 56 data bytes
	
	--- 172.19.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.19.144.1) from pod (busybox-7dff88458-45cpz): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-rjg7r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-rjg7r -- sh -c "ping -c 1 172.19.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-rjg7r -- sh -c "ping -c 1 172.19.144.1": exit status 1 (10.4647809s)

-- stdout --
	PING 172.19.144.1 (172.19.144.1): 56 data bytes
	
	--- 172.19.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.19.144.1) from pod (busybox-7dff88458-rjg7r): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-x4chx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-x4chx -- sh -c "ping -c 1 172.19.144.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-x4chx -- sh -c "ping -c 1 172.19.144.1": exit status 1 (10.4572109s)

-- stdout --
	PING 172.19.144.1 (172.19.144.1): 56 data bytes
	
	--- 172.19.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.19.144.1) from pod (busybox-7dff88458-x4chx): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-565300 -n ha-565300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-565300 -n ha-565300: (11.11955s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 logs -n 25: (7.9489811s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:22 UTC | 23 Sep 24 12:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:22 UTC | 23 Sep 24 12:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:22 UTC | 23 Sep 24 12:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:22 UTC | 23 Sep 24 12:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:22 UTC | 23 Sep 24 12:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:22 UTC | 23 Sep 24 12:22 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-45cpz --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-rjg7r --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-x4chx --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-45cpz --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-rjg7r --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-x4chx --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-45cpz -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-rjg7r -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-x4chx -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- get pods -o          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-45cpz              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC |                     |
	|         | busybox-7dff88458-45cpz -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.144.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-rjg7r              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC |                     |
	|         | busybox-7dff88458-rjg7r -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.144.1            |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC | 23 Sep 24 12:23 UTC |
	|         | busybox-7dff88458-x4chx              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-565300 -- exec                 | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:23 UTC |                     |
	|         | busybox-7dff88458-x4chx -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.144.1            |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:11:33
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:11:32.978079    3340 out.go:345] Setting OutFile to fd 1532 ...
	I0923 12:11:33.023194    3340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:11:33.023194    3340 out.go:358] Setting ErrFile to fd 1356...
	I0923 12:11:33.023194    3340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:11:33.040255    3340 out.go:352] Setting JSON to false
	I0923 12:11:33.042224    3340 start.go:129] hostinfo: {"hostname":"minikube5","uptime":489469,"bootTime":1726604023,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 12:11:33.042224    3340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 12:11:33.047289    3340 out.go:177] * [ha-565300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 12:11:33.050785    3340 notify.go:220] Checking for updates...
	I0923 12:11:33.050785    3340 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:11:33.053483    3340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:11:33.056631    3340 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 12:11:33.058975    3340 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:11:33.061367    3340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:11:33.064125    3340 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:11:37.823204    3340 out.go:177] * Using the hyperv driver based on user configuration
	I0923 12:11:37.827034    3340 start.go:297] selected driver: hyperv
	I0923 12:11:37.827034    3340 start.go:901] validating driver "hyperv" against <nil>
	I0923 12:11:37.827034    3340 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:11:37.868172    3340 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:11:37.869018    3340 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:11:37.869018    3340 cni.go:84] Creating CNI manager for ""
	I0923 12:11:37.869018    3340 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 12:11:37.869018    3340 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 12:11:37.870017    3340 start.go:340] cluster config:
	{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:11:37.870017    3340 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:11:37.874629    3340 out.go:177] * Starting "ha-565300" primary control-plane node in "ha-565300" cluster
	I0923 12:11:37.876732    3340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:11:37.877730    3340 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 12:11:37.877730    3340 cache.go:56] Caching tarball of preloaded images
	I0923 12:11:37.878097    3340 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:11:37.878097    3340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:11:37.878742    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:11:37.879180    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json: {Name:mkc75814a813493ad95a286b802d19c495eecb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:11:37.880387    3340 start.go:360] acquireMachinesLock for ha-565300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:11:37.880387    3340 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-565300"
	I0923 12:11:37.880614    3340 start.go:93] Provisioning new machine with config: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:11:37.880819    3340 start.go:125] createHost starting for "" (driver="hyperv")
	I0923 12:11:37.883223    3340 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:11:37.883801    3340 start.go:159] libmachine.API.Create for "ha-565300" (driver="hyperv")
	I0923 12:11:37.883801    3340 client.go:168] LocalClient.Create starting
	I0923 12:11:37.883801    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:11:37.884980    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0923 12:11:39.701346    3340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0923 12:11:39.701346    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:39.701627    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0923 12:11:41.206711    3340 main.go:141] libmachine: [stdout =====>] : False
	
	I0923 12:11:41.206711    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:41.206844    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:11:42.563585    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:11:42.563585    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:42.564162    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:11:45.601239    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:11:45.601239    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:45.604124    3340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:11:45.987938    3340 main.go:141] libmachine: Creating SSH key...
	I0923 12:11:46.263141    3340 main.go:141] libmachine: Creating VM...
	I0923 12:11:46.263141    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:11:48.690486    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:11:48.690486    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:48.691214    3340 main.go:141] libmachine: Using switch "Default Switch"
	I0923 12:11:48.691281    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:11:50.199853    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:11:50.199853    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:50.199853    3340 main.go:141] libmachine: Creating VHD
	I0923 12:11:50.200263    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0923 12:11:53.525044    3340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CCCE98BF-FC9E-4970-B4A7-8EDBBFA23647
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0923 12:11:53.525456    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:53.525456    3340 main.go:141] libmachine: Writing magic tar header
	I0923 12:11:53.525456    3340 main.go:141] libmachine: Writing SSH key tar header
	I0923 12:11:53.534386    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0923 12:11:56.390814    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:11:56.390814    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:56.391947    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\disk.vhd' -SizeBytes 20000MB
	I0923 12:11:58.708762    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:11:58.709454    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:58.709534    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-565300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0923 12:12:02.027536    3340 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-565300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0923 12:12:02.027536    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:02.027636    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-565300 -DynamicMemoryEnabled $false
	I0923 12:12:03.974860    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:03.974860    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:03.974992    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-565300 -Count 2
	I0923 12:12:05.846962    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:05.847410    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:05.847485    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-565300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\boot2docker.iso'
	I0923 12:12:08.127192    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:08.127192    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:08.127778    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-565300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\disk.vhd'
	I0923 12:12:10.430666    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:10.431391    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:10.431391    3340 main.go:141] libmachine: Starting VM...
	I0923 12:12:10.431391    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-565300
	I0923 12:12:13.161930    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:13.161930    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:13.161930    3340 main.go:141] libmachine: Waiting for host to start...
	I0923 12:12:13.162911    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:15.206016    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:15.206016    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:15.206016    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:17.427988    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:17.427988    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:18.428717    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:20.362345    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:20.362345    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:20.362345    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:22.566265    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:22.566265    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:23.567445    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:25.471861    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:25.471895    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:25.472091    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:27.630283    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:27.630469    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:28.631003    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:30.586422    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:30.586422    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:30.586541    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:32.814531    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:32.814564    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:33.815259    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:35.722593    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:35.723495    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:35.723495    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:38.134812    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:38.134812    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:38.134812    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:39.998267    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:39.998267    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:39.998267    3340 machine.go:93] provisionDockerMachine start ...
	I0923 12:12:39.998267    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:41.857405    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:41.857405    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:41.857405    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:44.056159    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:44.056355    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:44.060816    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:12:44.071225    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:12:44.072228    3340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:12:44.204546    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:12:44.204716    3340 buildroot.go:166] provisioning hostname "ha-565300"
	I0923 12:12:44.204716    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:46.026771    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:46.027420    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:46.027420    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:48.183257    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:48.183257    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:48.187828    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:12:48.188088    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:12:48.188088    3340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565300 && echo "ha-565300" | sudo tee /etc/hostname
	I0923 12:12:48.333214    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565300
	
	I0923 12:12:48.333214    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:50.153533    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:50.153533    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:50.154332    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:52.281467    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:52.281663    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:52.284766    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:12:52.285377    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:12:52.285377    3340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565300/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:12:52.421094    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:12:52.421094    3340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 12:12:52.421094    3340 buildroot.go:174] setting up certificates
	I0923 12:12:52.421094    3340 provision.go:84] configureAuth start
	I0923 12:12:52.421094    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:54.288167    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:54.289169    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:54.289339    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:56.464735    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:56.464932    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:56.464932    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:58.272016    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:58.272357    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:58.272357    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:00.489531    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:00.490357    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:00.490357    3340 provision.go:143] copyHostCerts
	I0923 12:13:00.490504    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 12:13:00.490742    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 12:13:00.490815    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 12:13:00.490951    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 12:13:00.492216    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 12:13:00.492388    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 12:13:00.492469    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 12:13:00.492735    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 12:13:00.493436    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 12:13:00.493623    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 12:13:00.493705    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 12:13:00.493873    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 12:13:00.494709    3340 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-565300 san=[127.0.0.1 172.19.146.194 ha-565300 localhost minikube]
	I0923 12:13:00.640683    3340 provision.go:177] copyRemoteCerts
	I0923 12:13:00.648701    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:13:00.648701    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:02.519203    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:02.519203    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:02.519304    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:04.702811    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:04.702811    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:04.704396    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:13:04.808977    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1599958s)
	I0923 12:13:04.809182    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 12:13:04.809924    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:13:04.851429    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 12:13:04.851996    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0923 12:13:04.894522    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 12:13:04.894963    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:13:04.934660    3340 provision.go:87] duration metric: took 12.5126428s to configureAuth
	I0923 12:13:04.934718    3340 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:13:04.935568    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:13:04.935639    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:06.758111    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:06.758111    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:06.758111    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:08.929281    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:08.929813    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:08.933512    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:08.933512    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:08.933512    3340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:13:09.064168    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:13:09.064168    3340 buildroot.go:70] root file system type: tmpfs
	I0923 12:13:09.064168    3340 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:13:09.064168    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:10.884061    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:10.884061    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:10.884061    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:13.065579    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:13.065579    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:13.069481    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:13.069865    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:13.069935    3340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:13:13.208408    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:13:13.208928    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:15.042932    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:15.042932    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:15.043293    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:17.257188    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:17.257188    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:17.261145    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:17.261483    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:17.261574    3340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:13:19.349851    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 12:13:19.350460    3340 machine.go:96] duration metric: took 39.3494836s to provisionDockerMachine
	I0923 12:13:19.350460    3340 client.go:171] duration metric: took 1m41.4598151s to LocalClient.Create
	I0923 12:13:19.350460    3340 start.go:167] duration metric: took 1m41.4598151s to libmachine.API.Create "ha-565300"
	I0923 12:13:19.350460    3340 start.go:293] postStartSetup for "ha-565300" (driver="hyperv")
	I0923 12:13:19.350460    3340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:13:19.358805    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:13:19.358805    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:21.205616    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:21.206341    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:21.206341    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:23.395568    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:23.395568    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:23.396574    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:13:23.498654    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1395702s)
	I0923 12:13:23.506687    3340 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:13:23.514435    3340 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:13:23.514435    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 12:13:23.515075    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 12:13:23.515887    3340 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 12:13:23.515887    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 12:13:23.526331    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:13:23.542677    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 12:13:23.585065    3340 start.go:296] duration metric: took 4.2343196s for postStartSetup
	I0923 12:13:23.588975    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:25.439069    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:25.439069    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:25.439403    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:27.644756    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:27.644756    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:27.645714    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:13:27.647742    3340 start.go:128] duration metric: took 1m49.7595184s to createHost
	I0923 12:13:27.647742    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:29.482844    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:29.482844    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:29.483258    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:31.661842    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:31.662840    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:31.666349    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:31.666876    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:31.666876    3340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:13:31.782986    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727093611.991103345
	
	I0923 12:13:31.783098    3340 fix.go:216] guest clock: 1727093611.991103345
	I0923 12:13:31.783098    3340 fix.go:229] Guest: 2024-09-23 12:13:31.991103345 +0000 UTC Remote: 2024-09-23 12:13:27.6477425 +0000 UTC m=+114.732820001 (delta=4.343360845s)
	I0923 12:13:31.783244    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:33.636121    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:33.636121    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:33.636517    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:35.803805    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:35.803805    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:35.809876    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:35.810324    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:35.810324    3340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727093611
	I0923 12:13:35.947901    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 12:13:31 UTC 2024
	
	I0923 12:13:35.947901    3340 fix.go:236] clock set: Mon Sep 23 12:13:31 UTC 2024
	 (err=<nil>)
	I0923 12:13:35.947901    3340 start.go:83] releasing machines lock for "ha-565300", held for 1m58.0595495s
	I0923 12:13:35.948449    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:37.799890    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:37.799890    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:37.799890    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:39.965345    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:39.965345    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:39.969481    3340 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 12:13:39.969548    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:39.975699    3340 ssh_runner.go:195] Run: cat /version.json
	I0923 12:13:39.976250    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:41.873769    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:41.873769    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:41.873959    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:41.877437    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:41.877437    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:41.877547    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:44.154760    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:44.154835    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:44.154985    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:13:44.177520    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:44.177685    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:44.177969    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:13:44.254371    3340 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.2845337s)
	W0923 12:13:44.254524    3340 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 12:13:44.269720    3340 ssh_runner.go:235] Completed: cat /version.json: (4.2931809s)
	I0923 12:13:44.279758    3340 ssh_runner.go:195] Run: systemctl --version
	I0923 12:13:44.295780    3340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:13:44.303293    3340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:13:44.311767    3340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:13:44.338154    3340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:13:44.338269    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:13:44.338442    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0923 12:13:44.352524    3340 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 12:13:44.352524    3340 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 12:13:44.383799    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:13:44.416135    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:13:44.439326    3340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:13:44.452051    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:13:44.483142    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:13:44.508284    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:13:44.536304    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:13:44.562583    3340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:13:44.588400    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:13:44.614675    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:13:44.643861    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 12:13:44.670729    3340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:13:44.688132    3340 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:13:44.696881    3340 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:13:44.728075    3340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:13:44.750972    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:13:44.910487    3340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:13:44.936784    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:13:44.948451    3340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:13:44.979227    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:13:45.008231    3340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:13:45.046366    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:13:45.078463    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:13:45.110043    3340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:13:45.172398    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:13:45.194360    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:13:45.234926    3340 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:13:45.253370    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:13:45.268961    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:13:45.304239    3340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:13:45.467376    3340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:13:45.637903    3340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:13:45.638248    3340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:13:45.679959    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:13:45.867148    3340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:13:48.398044    3340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5305671s)
	I0923 12:13:48.409146    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:13:48.440062    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:13:48.469276    3340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:13:48.649341    3340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:13:48.842024    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:13:49.026941    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:13:49.065458    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:13:49.094000    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:13:49.265557    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:13:49.368777    3340 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:13:49.379784    3340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:13:49.388737    3340 start.go:563] Will wait 60s for crictl version
	I0923 12:13:49.398183    3340 ssh_runner.go:195] Run: which crictl
	I0923 12:13:49.412352    3340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:13:49.458715    3340 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:13:49.470375    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:13:49.506083    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:13:49.537704    3340 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:13:49.537896    3340 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 12:13:49.541799    3340 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 12:13:49.541799    3340 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 12:13:49.541799    3340 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 12:13:49.541799    3340 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 12:13:49.544485    3340 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 12:13:49.544485    3340 ip.go:214] interface addr: 172.19.144.1/20
	I0923 12:13:49.552109    3340 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 12:13:49.558776    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:13:49.588336    3340 kubeadm.go:883] updating cluster {Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:13:49.588336    3340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:13:49.594106    3340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:13:49.616419    3340 docker.go:685] Got preloaded images: 
	I0923 12:13:49.616419    3340 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0923 12:13:49.624338    3340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 12:13:49.652236    3340 ssh_runner.go:195] Run: which lz4
	I0923 12:13:49.656985    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 12:13:49.664592    3340 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 12:13:49.670654    3340 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 12:13:49.671659    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0923 12:13:51.321946    3340 docker.go:649] duration metric: took 1.6648483s to copy over tarball
	I0923 12:13:51.329495    3340 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 12:13:59.788772    3340 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4587063s)
	I0923 12:13:59.788918    3340 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 12:13:59.853820    3340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 12:13:59.870819    3340 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0923 12:13:59.910818    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:14:00.085897    3340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:14:03.345536    3340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2592847s)
	I0923 12:14:03.356190    3340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:14:03.379957    3340 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 12:14:03.380036    3340 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:14:03.380036    3340 kubeadm.go:934] updating node { 172.19.146.194 8443 v1.31.1 docker true true} ...
	I0923 12:14:03.380151    3340 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.146.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:14:03.387851    3340 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 12:14:03.444198    3340 cni.go:84] Creating CNI manager for ""
	I0923 12:14:03.444198    3340 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:14:03.444198    3340 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:14:03.444198    3340 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.146.194 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565300 NodeName:ha-565300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.146.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.146.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:14:03.444198    3340 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.146.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-565300"
	  kubeletExtraArgs:
	    node-ip: 172.19.146.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.146.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:14:03.444198    3340 kube-vip.go:115] generating kube-vip config ...
	I0923 12:14:03.453177    3340 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:14:03.474900    3340 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:14:03.475151    3340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0923 12:14:03.484633    3340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:14:03.504138    3340 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:14:03.511511    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 12:14:03.526318    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0923 12:14:03.551491    3340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:14:03.575933    3340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 12:14:03.600955    3340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 12:14:03.641331    3340 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:14:03.647196    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:14:03.674201    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:14:03.836279    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:14:03.862792    3340 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300 for IP: 172.19.146.194
	I0923 12:14:03.862889    3340 certs.go:194] generating shared ca certs ...
	I0923 12:14:03.862889    3340 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:03.863803    3340 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 12:14:03.864593    3340 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 12:14:03.864593    3340 certs.go:256] generating profile certs ...
	I0923 12:14:03.865564    3340 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key
	I0923 12:14:03.865632    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.crt with IP's: []
	I0923 12:14:04.034779    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.crt ...
	I0923 12:14:04.034779    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.crt: {Name:mk0eabf58bc28b7e88916d61fb2acdce8c8c3d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.036783    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key ...
	I0923 12:14:04.036783    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key: {Name:mkb5e6f177eab2a657ef89ec7acff0020110aa26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.037791    3340 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.e4c40462
	I0923 12:14:04.037791    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.e4c40462 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.146.194 172.19.159.254]
	I0923 12:14:04.247095    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.e4c40462 ...
	I0923 12:14:04.247095    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.e4c40462: {Name:mk721a003060e4989528317e20d96954efec0127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.249171    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.e4c40462 ...
	I0923 12:14:04.249171    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.e4c40462: {Name:mkdccd811da24fa2e143615d68ba9562a3f3cdb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.250529    3340 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.e4c40462 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt
	I0923 12:14:04.263847    3340 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.e4c40462 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key
	I0923 12:14:04.266370    3340 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key
	I0923 12:14:04.266370    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt with IP's: []
	I0923 12:14:04.415661    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt ...
	I0923 12:14:04.415661    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt: {Name:mk7ac3327e52fa143763dcdc0dbe2ce5fae95d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.417193    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key ...
	I0923 12:14:04.417193    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key: {Name:mk6acac745e732e2160ab3ac3ed54a7d89e8268a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.417444    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:14:04.418424    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:14:04.418736    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:14:04.419011    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:14:04.419251    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:14:04.419495    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:14:04.419581    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:14:04.427624    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:14:04.428632    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 12:14:04.429022    3340 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 12:14:04.429022    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 12:14:04.429220    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 12:14:04.429473    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 12:14:04.429670    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 12:14:04.429670    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 12:14:04.429670    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:04.429670    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 12:14:04.429670    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 12:14:04.431901    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:14:04.473185    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 12:14:04.511961    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:14:04.551336    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:14:04.589323    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 12:14:04.632564    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:14:04.671881    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:14:04.718355    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:14:04.757603    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:14:04.800716    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 12:14:04.845540    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 12:14:04.886961    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:14:04.921829    3340 ssh_runner.go:195] Run: openssl version
	I0923 12:14:04.939061    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:14:04.962860    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:04.969622    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:04.981633    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:04.999213    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:14:05.022285    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 12:14:05.048613    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 12:14:05.055517    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 12:14:05.064282    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 12:14:05.080660    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 12:14:05.105258    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 12:14:05.133343    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 12:14:05.139063    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 12:14:05.146734    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 12:14:05.161913    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:14:05.186221    3340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:14:05.192271    3340 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:14:05.192271    3340 kubeadm.go:392] StartCluster: {Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clu
sterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:14:05.203356    3340 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 12:14:05.233359    3340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:14:05.255933    3340 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:14:05.280795    3340 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:14:05.295765    3340 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:14:05.295765    3340 kubeadm.go:157] found existing configuration files:
	
	I0923 12:14:05.306772    3340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:14:05.320875    3340 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:14:05.327388    3340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:14:05.349632    3340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:14:05.362718    3340 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:14:05.370850    3340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:14:05.398148    3340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:14:05.412983    3340 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:14:05.422172    3340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:14:05.445948    3340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:14:05.460862    3340 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:14:05.470799    3340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 12:14:05.485970    3340 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 12:14:05.683113    3340 kubeadm.go:310] W0923 12:14:05.893800    1762 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:14:05.684872    3340 kubeadm.go:310] W0923 12:14:05.895003    1762 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:14:05.803915    3340 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 12:14:17.868487    3340 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 12:14:17.868749    3340 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 12:14:17.868859    3340 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 12:14:17.869156    3340 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 12:14:17.869459    3340 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 12:14:17.869749    3340 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:14:17.872183    3340 out.go:235]   - Generating certificates and keys ...
	I0923 12:14:17.872822    3340 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 12:14:17.873082    3340 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 12:14:17.873082    3340 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 12:14:17.873082    3340 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 12:14:17.873082    3340 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 12:14:17.873607    3340 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 12:14:17.873838    3340 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 12:14:17.874064    3340 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-565300 localhost] and IPs [172.19.146.194 127.0.0.1 ::1]
	I0923 12:14:17.874231    3340 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 12:14:17.874475    3340 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-565300 localhost] and IPs [172.19.146.194 127.0.0.1 ::1]
	I0923 12:14:17.874642    3340 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 12:14:17.874781    3340 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 12:14:17.874863    3340 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 12:14:17.874930    3340 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:14:17.875064    3340 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:14:17.875330    3340 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:14:17.875399    3340 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:14:17.875638    3340 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:14:17.875638    3340 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:14:17.875638    3340 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:14:17.875638    3340 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:14:17.879172    3340 out.go:235]   - Booting up control plane ...
	I0923 12:14:17.879172    3340 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:14:17.879172    3340 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:14:17.879172    3340 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:14:17.879172    3340 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:14:17.880176    3340 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:14:17.880248    3340 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 12:14:17.880248    3340 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 12:14:17.880248    3340 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 12:14:17.880248    3340 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.078066ms
	I0923 12:14:17.881007    3340 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 12:14:17.881130    3340 kubeadm.go:310] [api-check] The API server is healthy after 7.002449166s
	I0923 12:14:17.881406    3340 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 12:14:17.881662    3340 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 12:14:17.881786    3340 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 12:14:17.882206    3340 kubeadm.go:310] [mark-control-plane] Marking the node ha-565300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 12:14:17.882206    3340 kubeadm.go:310] [bootstrap-token] Using token: w22tpi.aqmh61cssdet6ypg
	I0923 12:14:17.884602    3340 out.go:235]   - Configuring RBAC rules ...
	I0923 12:14:17.884602    3340 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 12:14:17.884602    3340 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 12:14:17.885197    3340 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 12:14:17.885197    3340 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 12:14:17.885798    3340 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 12:14:17.885863    3340 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 12:14:17.885863    3340 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 12:14:17.885863    3340 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 12:14:17.886404    3340 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 12:14:17.886404    3340 kubeadm.go:310] 
	I0923 12:14:17.886529    3340 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 12:14:17.886578    3340 kubeadm.go:310] 
	I0923 12:14:17.886643    3340 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 12:14:17.886643    3340 kubeadm.go:310] 
	I0923 12:14:17.886643    3340 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 12:14:17.886643    3340 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 12:14:17.887243    3340 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 12:14:17.887243    3340 kubeadm.go:310] 
	I0923 12:14:17.887243    3340 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 12:14:17.887243    3340 kubeadm.go:310] 
	I0923 12:14:17.887243    3340 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 12:14:17.887243    3340 kubeadm.go:310] 
	I0923 12:14:17.887243    3340 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 12:14:17.887243    3340 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 12:14:17.887243    3340 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 12:14:17.887784    3340 kubeadm.go:310] 
	I0923 12:14:17.887819    3340 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 12:14:17.887819    3340 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 12:14:17.887819    3340 kubeadm.go:310] 
	I0923 12:14:17.887819    3340 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w22tpi.aqmh61cssdet6ypg \
	I0923 12:14:17.888728    3340 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 \
	I0923 12:14:17.888792    3340 kubeadm.go:310] 	--control-plane 
	I0923 12:14:17.888849    3340 kubeadm.go:310] 
	I0923 12:14:17.888974    3340 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 12:14:17.889032    3340 kubeadm.go:310] 
	I0923 12:14:17.889136    3340 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w22tpi.aqmh61cssdet6ypg \
	I0923 12:14:17.889406    3340 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
	I0923 12:14:17.889468    3340 cni.go:84] Creating CNI manager for ""
	I0923 12:14:17.889527    3340 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:14:17.897652    3340 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 12:14:17.908154    3340 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 12:14:17.916478    3340 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 12:14:17.916478    3340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 12:14:17.965288    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 12:14:18.461915    3340 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:14:18.474690    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565300 minikube.k8s.io/updated_at=2024_09_23T12_14_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-565300 minikube.k8s.io/primary=true
	I0923 12:14:18.474690    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:18.515581    3340 ops.go:34] apiserver oom_adj: -16
	I0923 12:14:18.737205    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:19.239531    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:19.739080    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:20.237271    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:20.739765    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:21.239513    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:21.738315    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:22.240571    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:22.738499    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:22.856080    3340 kubeadm.go:1113] duration metric: took 4.3938678s to wait for elevateKubeSystemPrivileges
	I0923 12:14:22.856080    3340 kubeadm.go:394] duration metric: took 17.6626169s to StartCluster
	I0923 12:14:22.856080    3340 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:22.856080    3340 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:14:22.857047    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:22.858063    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 12:14:22.858063    3340 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:14:22.859047    3340 start.go:241] waiting for startup goroutines ...
	I0923 12:14:22.858063    3340 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:14:22.859047    3340 addons.go:69] Setting storage-provisioner=true in profile "ha-565300"
	I0923 12:14:22.859047    3340 addons.go:234] Setting addon storage-provisioner=true in "ha-565300"
	I0923 12:14:22.859047    3340 addons.go:69] Setting default-storageclass=true in profile "ha-565300"
	I0923 12:14:22.859047    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:14:22.859047    3340 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565300"
	I0923 12:14:22.859047    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:14:22.860051    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:22.860051    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:23.033456    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 12:14:23.390318    3340 start.go:971] {"host.minikube.internal": 172.19.144.1} host record injected into CoreDNS's ConfigMap
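The sed pipeline above injects a `hosts` block into the CoreDNS Corefile so pods can resolve `host.minikube.internal`. A minimal Python sketch of the same edit, using a simplified stand-in Corefile rather than the cluster's full ConfigMap:

```python
# Sketch of the edit the sed pipeline performs: insert a `hosts` block
# (with fallthrough) immediately before the `forward . /etc/resolv.conf`
# line. The Corefile below is a simplified stand-in, not the real one.
def inject_host_record(corefile: str, host_ip: str) -> str:
    out = []
    for line in corefile.splitlines():
        if line.lstrip().startswith("forward . /etc/resolv.conf"):
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f"{indent}hosts {{")
            out.append(f"{indent}   {host_ip} host.minikube.internal")
            out.append(f"{indent}   fallthrough")
            out.append(f"{indent}}}")
        out.append(line)
    return "\n".join(out)

corefile = """.:53 {
    errors
    forward . /etc/resolv.conf
    cache 30
}"""
print(inject_host_record(corefile, "172.19.144.1"))
```

The `fallthrough` directive matters: without it, names not listed in the `hosts` block would get NXDOMAIN instead of being passed on to `forward`.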
	I0923 12:14:24.949642    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:24.949642    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:24.949642    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:24.949642    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:24.951392    3340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:14:24.952215    3340 kapi.go:59] client config for ha-565300: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 12:14:24.953797    3340 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 12:14:24.954296    3340 addons.go:234] Setting addon default-storageclass=true in "ha-565300"
	I0923 12:14:24.954373    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:14:24.954455    3340 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:14:24.955415    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:24.956883    3340 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:14:24.956883    3340 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:14:24.956883    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:26.988410    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:26.988410    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:26.989075    3340 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:14:26.989075    3340 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:14:26.989202    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:27.139553    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:27.139553    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:27.139553    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:14:29.054903    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:29.054903    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:29.055017    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:14:29.502669    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:14:29.503094    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:29.503525    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:14:29.641979    3340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:14:30.824370    3340 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1823118s)
	I0923 12:14:31.376452    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:14:31.377290    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:31.377660    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:14:31.497740    3340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:14:31.634571    3340 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 12:14:31.634571    3340 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 12:14:31.634789    3340 round_trippers.go:463] GET https://172.19.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0923 12:14:31.634857    3340 round_trippers.go:469] Request Headers:
	I0923 12:14:31.634912    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:14:31.634912    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:14:31.648620    3340 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0923 12:14:31.650596    3340 round_trippers.go:463] PUT https://172.19.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 12:14:31.650596    3340 round_trippers.go:469] Request Headers:
	I0923 12:14:31.650596    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:14:31.650596    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:14:31.650596    3340 round_trippers.go:473]     Content-Type: application/json
	I0923 12:14:31.654188    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:14:31.658559    3340 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 12:14:31.663195    3340 addons.go:510] duration metric: took 8.8045375s for enable addons: enabled=[storage-provisioner default-storageclass]
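The "took 8.8045375s" metric can be sanity-checked against the surrounding log timestamps (addon enable starts around 12:14:22.858 and finishes at 12:14:31.663). The internal start point minikube uses differs from the first logged line by under a millisecond, so only the rough magnitude is comparable:

```python
# Cross-check the logged addon-enable duration against the log timestamps.
from datetime import datetime

fmt = "%H:%M:%S.%f"
start = datetime.strptime("12:14:22.858063", fmt)  # addons.go:507 line
end = datetime.strptime("12:14:31.663195", fmt)    # addons.go:510 line
elapsed = (end - start).total_seconds()
print(f"{elapsed:.6f}s")  # close to the logged 8.8045375s
```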
	I0923 12:14:31.663195    3340 start.go:246] waiting for cluster config update ...
	I0923 12:14:31.663195    3340 start.go:255] writing updated cluster config ...
	I0923 12:14:31.666015    3340 out.go:201] 
	I0923 12:14:31.677945    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:14:31.678072    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:14:31.682512    3340 out.go:177] * Starting "ha-565300-m02" control-plane node in "ha-565300" cluster
	I0923 12:14:31.689670    3340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:14:31.689670    3340 cache.go:56] Caching tarball of preloaded images
	I0923 12:14:31.689670    3340 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:14:31.689670    3340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:14:31.689670    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:14:31.691637    3340 start.go:360] acquireMachinesLock for ha-565300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:14:31.691637    3340 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-565300-m02"
	I0923 12:14:31.692634    3340 start.go:93] Provisioning new machine with config: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:14:31.692634    3340 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0923 12:14:31.697650    3340 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:14:31.697650    3340 start.go:159] libmachine.API.Create for "ha-565300" (driver="hyperv")
	I0923 12:14:31.697650    3340 client.go:168] LocalClient.Create starting
	I0923 12:14:31.697650    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:14:31.698634    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0923 12:14:33.393825    3340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0923 12:14:33.393825    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:33.394161    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0923 12:14:34.890523    3340 main.go:141] libmachine: [stdout =====>] : False
	
	I0923 12:14:34.890596    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:34.890665    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:14:36.204595    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:14:36.204595    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:36.205483    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:14:39.318731    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:14:39.319414    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:39.321702    3340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:14:39.699798    3340 main.go:141] libmachine: Creating SSH key...
	I0923 12:14:39.810406    3340 main.go:141] libmachine: Creating VM...
	I0923 12:14:39.810406    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:14:42.290510    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:14:42.290510    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:42.290510    3340 main.go:141] libmachine: Using switch "Default Switch"
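libmachine chooses a switch by parsing the `ConvertTo-Json` output shown above. A simplified Python sketch of that parse (the real selection logic also prefers External switches via the `Where-Object`/`Sort-Object` filter already baked into the PowerShell query, so here we just take the first surviving entry):

```python
import json

# The ConvertTo-Json stdout from the log, reproduced verbatim.
stdout = """[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]"""

def pick_switch(raw: str) -> str:
    """Simplified stand-in for libmachine's switch selection: the
    PowerShell query already filtered to External switches or the
    well-known Default Switch GUID; take the first entry."""
    switches = json.loads(raw)
    return switches[0]["Name"]

print(f'Using switch "{pick_switch(stdout)}"')
```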
	I0923 12:14:42.290510    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:14:43.810395    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:14:43.810395    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:43.810395    3340 main.go:141] libmachine: Creating VHD
	I0923 12:14:43.810395    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0923 12:14:47.241762    3340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B2171BFB-757A-4D97-9114-8CA0521DECDD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0923 12:14:47.241762    3340 main.go:141] libmachine: [stderr =====>] : 
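The `New-VHD` output above reports `FileSize : 10486272` for a fixed VHD whose `Size` is 10485760 (10 MB). The 512-byte difference is the fixed-VHD footer, which is appended after the raw disk image:

```python
# A fixed VHD is the raw disk image plus a 512-byte footer, which is why
# New-VHD reports FileSize = Size + 512 for the 10MB placeholder disk.
VHD_FOOTER_BYTES = 512

size = 10 * 1024 * 1024            # Size     : 10485760
file_size = size + VHD_FOOTER_BYTES
print(size, file_size)  # 10485760 10486272
```

This placeholder is deliberately tiny: it exists only so `Convert-VHD ... -VHDType Dynamic` and `Resize-VHD ... -SizeBytes 20000MB` (the next two commands in the log) can produce the real 20000 MB dynamic disk.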
	I0923 12:14:47.241762    3340 main.go:141] libmachine: Writing magic tar header
	I0923 12:14:47.241946    3340 main.go:141] libmachine: Writing SSH key tar header
	I0923 12:14:47.251054    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0923 12:14:50.127124    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:14:50.127124    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:50.127124    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\disk.vhd' -SizeBytes 20000MB
	I0923 12:14:52.372418    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:14:52.372418    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:52.372515    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-565300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0923 12:14:55.561341    3340 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-565300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0923 12:14:55.561341    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:55.561341    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-565300-m02 -DynamicMemoryEnabled $false
	I0923 12:14:57.514586    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:14:57.514586    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:57.514586    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-565300-m02 -Count 2
	I0923 12:14:59.371303    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:14:59.371303    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:59.371303    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-565300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\boot2docker.iso'
	I0923 12:15:01.620130    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:01.620232    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:01.620232    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-565300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\disk.vhd'
	I0923 12:15:03.988986    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:03.988986    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:03.988986    3340 main.go:141] libmachine: Starting VM...
	I0923 12:15:03.990042    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-565300-m02
	I0923 12:15:06.810744    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:06.811779    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:06.811803    3340 main.go:141] libmachine: Waiting for host to start...
	I0923 12:15:06.811968    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:08.834938    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:08.834938    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:08.834938    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:11.073616    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:11.074308    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:12.074445    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:14.037868    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:14.037868    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:14.037868    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:16.335915    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:16.335915    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:17.336652    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:19.270193    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:19.270922    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:19.271002    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:21.529156    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:21.529156    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:22.529767    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:24.550500    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:24.550500    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:24.550500    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:26.814416    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:26.814727    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:27.815707    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:29.813008    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:29.813764    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:29.813914    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:32.106185    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:32.106709    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:32.106709    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:34.030921    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:34.030921    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:34.030921    3340 machine.go:93] provisionDockerMachine start ...
	I0923 12:15:34.031043    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:35.921907    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:35.921958    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:35.921958    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:38.228217    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:38.228217    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:38.232324    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:15:38.244886    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:15:38.244886    3340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:15:38.381661    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:15:38.381661    3340 buildroot.go:166] provisioning hostname "ha-565300-m02"
	I0923 12:15:38.381661    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:40.290359    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:40.290359    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:40.291025    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:42.505424    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:42.505424    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:42.510206    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:15:42.510491    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:15:42.510491    3340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565300-m02 && echo "ha-565300-m02" | sudo tee /etc/hostname
	I0923 12:15:42.672016    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565300-m02
	
	I0923 12:15:42.672051    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:44.502023    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:44.502023    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:44.502093    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:46.766612    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:46.766612    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:46.770837    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:15:46.771025    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:15:46.771025    3340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:15:46.927497    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
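The shell snippet above makes the new hostname resolve locally: if `/etc/hosts` does not already contain it, an existing `127.0.1.1` entry is rewritten, otherwise one is appended. A Python rendering of that rule (a sketch of the logic, not the exact grep/sed invocations):

```python
import re

def ensure_hostname(hosts: str, name: str) -> str:
    """Mirror the /etc/hosts edit from the SSH snippet: no-op if the
    hostname is present, else rewrite the 127.0.1.1 line or append one."""
    if re.search(rf"\s{re.escape(name)}$", hosts, flags=re.MULTILINE):
        return hosts
    if re.search(r"^127\.0\.1\.1\s", hosts, flags=re.MULTILINE):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                      hosts, flags=re.MULTILINE)
    return hosts + f"\n127.0.1.1 {name}"

hosts = "127.0.0.1 localhost\n127.0.1.1 minikube"
print(ensure_hostname(hosts, "ha-565300-m02"))
```

Like the shell version, the edit is idempotent: running it again once `127.0.1.1 ha-565300-m02` is present changes nothing.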
	I0923 12:15:46.927497    3340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 12:15:46.927615    3340 buildroot.go:174] setting up certificates
	I0923 12:15:46.927615    3340 provision.go:84] configureAuth start
	I0923 12:15:46.927673    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:48.805075    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:48.805122    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:48.805122    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:50.983284    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:50.983284    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:50.983363    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:52.836297    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:52.836297    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:52.836798    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:55.084445    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:55.085383    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:55.085383    3340 provision.go:143] copyHostCerts
	I0923 12:15:55.085519    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 12:15:55.085724    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 12:15:55.085724    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 12:15:55.086093    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 12:15:55.086951    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 12:15:55.087140    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 12:15:55.087140    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 12:15:55.087372    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 12:15:55.087603    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 12:15:55.088230    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 12:15:55.088230    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 12:15:55.088597    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 12:15:55.089352    3340 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-565300-m02 san=[127.0.0.1 172.19.154.133 ha-565300-m02 localhost minikube]
	I0923 12:15:55.237599    3340 provision.go:177] copyRemoteCerts
	I0923 12:15:55.245799    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:15:55.245799    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:57.143033    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:57.143374    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:57.143374    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:59.417727    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:59.417727    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:59.418221    3340 sshutil.go:53] new ssh client: &{IP:172.19.154.133 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:15:59.523230    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.277142s)
	I0923 12:15:59.523230    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 12:15:59.523856    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:15:59.568229    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 12:15:59.568578    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:15:59.619610    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 12:15:59.620175    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 12:15:59.680492    3340 provision.go:87] duration metric: took 12.7520167s to configureAuth
	I0923 12:15:59.680492    3340 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:15:59.681114    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:15:59.681114    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:01.514582    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:01.514582    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:01.514582    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:03.756953    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:03.756953    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:03.761436    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:03.761513    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:03.761513    3340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:16:03.913754    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:16:03.913754    3340 buildroot.go:70] root file system type: tmpfs
	I0923 12:16:03.913977    3340 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:16:03.913977    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:05.738270    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:05.738790    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:05.738870    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:07.938357    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:07.938357    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:07.944182    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:07.944447    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:07.944447    3340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.146.194"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:16:08.114661    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.146.194
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:16:08.114661    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:09.917150    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:09.917150    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:09.917150    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:12.084333    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:12.084333    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:12.088083    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:12.088736    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:12.088736    3340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:16:14.229491    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 12:16:14.229547    3340 machine.go:96] duration metric: took 40.1959123s to provisionDockerMachine
	I0923 12:16:14.229547    3340 client.go:171] duration metric: took 1m42.5249759s to LocalClient.Create
	I0923 12:16:14.229609    3340 start.go:167] duration metric: took 1m42.5250373s to libmachine.API.Create "ha-565300"
	I0923 12:16:14.229609    3340 start.go:293] postStartSetup for "ha-565300-m02" (driver="hyperv")
	I0923 12:16:14.229609    3340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:16:14.237881    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:16:14.237881    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:16.067841    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:16.068798    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:16.068798    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:18.256548    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:18.256548    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:18.257375    3340 sshutil.go:53] new ssh client: &{IP:172.19.154.133 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:16:18.361095    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1229356s)
	I0923 12:16:18.370219    3340 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:16:18.376220    3340 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:16:18.376220    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 12:16:18.376220    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 12:16:18.377262    3340 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 12:16:18.377262    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 12:16:18.385398    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:16:18.402430    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 12:16:18.444430    3340 start.go:296] duration metric: took 4.214537s for postStartSetup
	I0923 12:16:18.445739    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:20.267780    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:20.268197    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:20.268271    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:22.464309    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:22.464309    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:22.464712    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:16:22.465968    3340 start.go:128] duration metric: took 1m50.765856s to createHost
	I0923 12:16:22.466486    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:24.341673    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:24.341731    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:24.341731    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:26.565660    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:26.565660    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:26.569433    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:26.570037    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:26.570037    3340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:16:26.705309    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727093786.909468732
	
	I0923 12:16:26.705344    3340 fix.go:216] guest clock: 1727093786.909468732
	I0923 12:16:26.705344    3340 fix.go:229] Guest: 2024-09-23 12:16:26.909468732 +0000 UTC Remote: 2024-09-23 12:16:22.465968 +0000 UTC m=+289.539245301 (delta=4.443500732s)
	I0923 12:16:26.705406    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:28.568250    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:28.568250    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:28.568250    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:30.803092    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:30.803092    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:30.809178    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:30.809926    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:30.809926    3340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727093786
	I0923 12:16:30.958623    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 12:16:26 UTC 2024
	
	I0923 12:16:30.958623    3340 fix.go:236] clock set: Mon Sep 23 12:16:26 UTC 2024
	 (err=<nil>)
	I0923 12:16:30.958623    3340 start.go:83] releasing machines lock for "ha-565300-m02", held for 1m59.2589358s
	I0923 12:16:30.959627    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:32.821202    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:32.821202    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:32.821278    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:35.073211    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:35.073854    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:35.077287    3340 out.go:177] * Found network options:
	I0923 12:16:35.080813    3340 out.go:177]   - NO_PROXY=172.19.146.194
	W0923 12:16:35.083035    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:16:35.084591    3340 out.go:177]   - NO_PROXY=172.19.146.194
	W0923 12:16:35.087423    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:16:35.089375    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:16:35.091192    3340 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 12:16:35.091333    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:35.098072    3340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:16:35.098630    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:37.015570    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:37.015570    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:37.016469    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:37.017547    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:37.017547    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:37.017547    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:39.297171    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:39.297171    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:39.297171    3340 sshutil.go:53] new ssh client: &{IP:172.19.154.133 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:16:39.320058    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:39.320058    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:39.320670    3340 sshutil.go:53] new ssh client: &{IP:172.19.154.133 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:16:39.400156    3340 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3085541s)
	W0923 12:16:39.400235    3340 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 12:16:39.416960    3340 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3185965s)
	W0923 12:16:39.416960    3340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:16:39.427889    3340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:16:39.454650    3340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:16:39.454650    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:16:39.454817    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:16:39.498224    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0923 12:16:39.519785    3340 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 12:16:39.519785    3340 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 12:16:39.527007    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:16:39.546876    3340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:16:39.555504    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:16:39.584822    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:16:39.617002    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:16:39.648736    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:16:39.677067    3340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:16:39.705842    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:16:39.732897    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:16:39.766501    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 12:16:39.794268    3340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:16:39.811048    3340 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:16:39.821149    3340 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:16:39.851256    3340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:16:39.875256    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:40.056862    3340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:16:40.087721    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:16:40.098089    3340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:16:40.129191    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:16:40.166235    3340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:16:40.215789    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:16:40.246795    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:16:40.279681    3340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:16:40.336665    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:16:40.359400    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:16:40.411137    3340 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:16:40.425888    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:16:40.443058    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:16:40.483337    3340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:16:40.663725    3340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:16:40.835255    3340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:16:40.835373    3340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:16:40.882912    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:41.068033    3340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:16:43.616134    3340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5469046s)
	I0923 12:16:43.627763    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:16:43.656432    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:16:43.686542    3340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:16:43.880055    3340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:16:44.044080    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:44.216298    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:16:44.251908    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:16:44.282039    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:44.462138    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:16:44.560104    3340 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:16:44.569076    3340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:16:44.576755    3340 start.go:563] Will wait 60s for crictl version
	I0923 12:16:44.584435    3340 ssh_runner.go:195] Run: which crictl
	I0923 12:16:44.602392    3340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:16:44.652255    3340 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:16:44.658900    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:16:44.693724    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:16:44.726505    3340 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:16:44.729492    3340 out.go:177]   - env NO_PROXY=172.19.146.194
	I0923 12:16:44.732487    3340 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 12:16:44.734497    3340 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 12:16:44.734497    3340 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 12:16:44.736383    3340 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 12:16:44.736383    3340 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 12:16:44.739193    3340 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 12:16:44.739286    3340 ip.go:214] interface addr: 172.19.144.1/20
	I0923 12:16:44.748307    3340 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 12:16:44.754987    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
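	(The `/etc/hosts` rewrite above follows a replace-then-append pattern: strip any stale `host.minikube.internal` entry, append the fresh mapping, and copy the result back atomically. A minimal sketch of the same pattern, run against a temp file instead of the real `/etc/hosts` since the logged command needs sudo — the sample IPs here are illustrative:

	```shell
	# Work on a throwaway copy of a hosts file (real command targets /etc/hosts).
	hosts=$(mktemp)
	printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"
	# Drop any stale host.minikube.internal line, then append the fresh mapping,
	# writing to a temp file first so the original is replaced in one step.
	{ grep -v 'host.minikube.internal$' "$hosts"; \
	  printf '172.19.144.1\thost.minikube.internal\n'; } > "$hosts.new"
	mv "$hosts.new" "$hosts"
	grep 'host.minikube.internal' "$hosts"
	```

	This keeps the update idempotent: rerunning it never duplicates the entry.)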
	I0923 12:16:44.776904    3340 mustload.go:65] Loading cluster: ha-565300
	I0923 12:16:44.777295    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:16:44.777900    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:16:46.603753    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:46.604398    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:46.604398    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:16:46.604855    3340 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300 for IP: 172.19.154.133
	I0923 12:16:46.604855    3340 certs.go:194] generating shared ca certs ...
	I0923 12:16:46.604855    3340 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:16:46.605628    3340 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 12:16:46.605628    3340 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 12:16:46.605628    3340 certs.go:256] generating profile certs ...
	I0923 12:16:46.606492    3340 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key
	I0923 12:16:46.606552    3340 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.d6e24336
	I0923 12:16:46.606552    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.d6e24336 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.146.194 172.19.154.133 172.19.159.254]
	I0923 12:16:46.702433    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.d6e24336 ...
	I0923 12:16:46.702433    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.d6e24336: {Name:mkf65afc351c4cfc9398fe8eef0be9bde7269a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:16:46.703961    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.d6e24336 ...
	I0923 12:16:46.703961    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.d6e24336: {Name:mkfaaf958dc4b0425649b8bb0994634b6b271bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:16:46.705334    3340 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.d6e24336 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt
	I0923 12:16:46.720514    3340 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.d6e24336 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key
	I0923 12:16:46.721504    3340 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key
	I0923 12:16:46.721504    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:16:46.721676    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:16:46.721890    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:16:46.721890    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:16:46.721890    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:16:46.721890    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:16:46.722520    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:16:46.723162    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:16:46.723652    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 12:16:46.724002    3340 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 12:16:46.724090    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 12:16:46.724223    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 12:16:46.724223    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 12:16:46.724742    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 12:16:46.724992    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 12:16:46.724992    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 12:16:46.724992    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:16:46.725593    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 12:16:46.725799    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:16:48.599986    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:48.599986    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:48.600086    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:50.832198    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:16:50.832198    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:50.832512    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:16:50.931245    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:16:50.938436    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:16:50.964515    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:16:50.970662    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 12:16:50.999503    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:16:51.006390    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:16:51.033869    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:16:51.047052    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:16:51.075869    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:16:51.085065    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:16:51.111808    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:16:51.118184    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0923 12:16:51.136209    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:16:51.183105    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 12:16:51.227159    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:16:51.269252    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:16:51.312155    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 12:16:51.354245    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:16:51.400171    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:16:51.441761    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:16:51.493519    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 12:16:51.534979    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:16:51.579338    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 12:16:51.622021    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:16:51.653903    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 12:16:51.683130    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:16:51.713551    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:16:51.743561    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:16:51.773348    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0923 12:16:51.804688    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:16:51.840803    3340 ssh_runner.go:195] Run: openssl version
	I0923 12:16:51.856691    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 12:16:51.884344    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 12:16:51.890462    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 12:16:51.899296    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 12:16:51.916454    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:16:51.942320    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:16:51.970014    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:16:51.977814    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:16:51.986251    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:16:52.002533    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:16:52.032169    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 12:16:52.058120    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 12:16:52.064911    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 12:16:52.077167    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 12:16:52.093304    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
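	(The `openssl x509 -hash` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above install CAs the way OpenSSL expects to find them: by a filename derived from the certificate's subject hash. A self-contained sketch using a throwaway self-signed cert in a temp dir, since the logged commands touch system paths:

	```shell
	# Generate a disposable CA cert to hash (subject name is illustrative).
	tmp=$(mktemp -d)
	openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
	  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 1 2>/dev/null
	# OpenSSL resolves CAs in a certs dir by "<subject-hash>.0" filenames.
	hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
	ln -fs "$tmp/ca.pem" "$tmp/${hash}.0"
	echo "$hash"
	```

	The symlink-by-hash layout is the same one tools like `c_rehash` produce.)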
	I0923 12:16:52.122180    3340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:16:52.129371    3340 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:16:52.129743    3340 kubeadm.go:934] updating node {m02 172.19.154.133 8443 v1.31.1 docker true true} ...
	I0923 12:16:52.129957    3340 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.154.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:16:52.130073    3340 kube-vip.go:115] generating kube-vip config ...
	I0923 12:16:52.139733    3340 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:16:52.167402    3340 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:16:52.167681    3340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0923 12:16:52.178341    3340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:16:52.199159    3340 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:16:52.209676    3340 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:16:52.228973    3340 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl
	I0923 12:16:52.229254    3340 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet
	I0923 12:16:52.229295    3340 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm
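	(The `?checksum=file:...sha256` suffix on the download URLs above tells the downloader to verify each binary against its published SHA-256 file. A local sketch of that verification step, with fake file contents standing in for the real binary:

	```shell
	# Stand-in for a downloaded binary plus its published .sha256 file.
	tmp=$(mktemp -d)
	printf 'fake-kubectl-binary' > "$tmp/kubectl"
	sha256sum "$tmp/kubectl" | awk '{print $1}' > "$tmp/kubectl.sha256"
	# Verification: recompute the digest and compare against the published one.
	want=$(cat "$tmp/kubectl.sha256")
	got=$(sha256sum "$tmp/kubectl" | awk '{print $1}')
	[ "$got" = "$want" ] && echo "checksum ok"
	```
	)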
	I0923 12:16:53.262831    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:16:53.270837    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:16:53.278585    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:16:53.279074    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:16:53.295258    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:16:53.304355    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:16:53.378529    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:16:53.378986    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:16:53.404280    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:16:53.457167    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:16:53.466118    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:16:53.491705    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:16:53.491841    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 12:16:54.397217    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:16:54.413473    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0923 12:16:54.442021    3340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:16:54.470289    3340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:16:54.508205    3340 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:16:54.515096    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:16:54.545175    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:54.730833    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:16:54.756927    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:16:54.757398    3340 start.go:317] joinCluster: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:16:54.757398    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:16:54.757398    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:16:56.573976    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:56.573976    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:56.574533    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:58.807739    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:16:58.807739    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:58.808354    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:16:59.140017    3340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3822165s)
	I0923 12:16:59.140166    3340 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:16:59.140252    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 37697z.9t4d8g449fg2twj4 --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-565300-m02 --control-plane --apiserver-advertise-address=172.19.154.133 --apiserver-bind-port=8443"
	I0923 12:17:42.171741    3340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 37697z.9t4d8g449fg2twj4 --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-565300-m02 --control-plane --apiserver-advertise-address=172.19.154.133 --apiserver-bind-port=8443": (43.0285844s)
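	(The `--discovery-token-ca-cert-hash sha256:...` value in the join command above is, per kubeadm's documented scheme, the SHA-256 of the cluster CA's DER-encoded public key. A sketch of that derivation against a throwaway cert, since the real cluster CA isn't available here:

	```shell
	# Disposable stand-in for the cluster CA certificate.
	tmp=$(mktemp -d)
	openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
	  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null
	# Hash the DER encoding of the CA's public key (kubeadm's pin format).
	hash=$(openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
	  | openssl pkey -pubin -outform der 2>/dev/null \
	  | sha256sum | awk '{print $1}')
	echo "sha256:$hash"
	```

	Joining nodes recompute this pin from the CA served during discovery and refuse to join on a mismatch.)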
	I0923 12:17:42.171741    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:17:42.897277    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565300-m02 minikube.k8s.io/updated_at=2024_09_23T12_17_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-565300 minikube.k8s.io/primary=false
	I0923 12:17:43.084341    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565300-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 12:17:43.228725    3340 start.go:319] duration metric: took 48.4680554s to joinCluster
	I0923 12:17:43.228914    3340 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:17:43.229664    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:17:43.231397    3340 out.go:177] * Verifying Kubernetes components...
	I0923 12:17:43.241851    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:17:43.537872    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:17:43.558676    3340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:17:43.559325    3340 kapi.go:59] client config for ha-565300: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:17:43.559455    3340 kubeadm.go:483] Overriding stale ClientConfig host https://172.19.159.254:8443 with https://172.19.146.194:8443
	I0923 12:17:43.560194    3340 node_ready.go:35] waiting up to 6m0s for node "ha-565300-m02" to be "Ready" ...
	I0923 12:17:43.560444    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:43.560444    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:43.560444    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:43.560516    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:43.577509    3340 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0923 12:17:44.060739    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:44.060739    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:44.060739    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:44.060739    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:44.072511    3340 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:17:44.561244    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:44.561244    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:44.561244    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:44.561244    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:44.566688    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:45.060807    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:45.060807    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:45.060807    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:45.060807    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:45.067085    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:45.561172    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:45.561172    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:45.561172    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:45.561241    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:45.566161    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:45.567483    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:46.060823    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:46.060823    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:46.060823    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:46.060823    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:46.066825    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:46.561883    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:46.561883    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:46.561883    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:46.561883    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:46.565882    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:17:47.061585    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:47.061585    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:47.061585    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:47.061585    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:47.066987    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:47.561467    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:47.561566    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:47.561566    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:47.561566    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:47.568768    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:17:47.569637    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:48.061522    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:48.061522    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:48.061522    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:48.061522    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:48.067840    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:48.561001    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:48.561001    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:48.561001    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:48.561001    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:48.566004    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:49.062042    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:49.062105    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:49.062105    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:49.062185    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:49.067078    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:49.561008    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:49.561008    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:49.561008    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:49.561008    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:49.566769    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:50.061101    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:50.061101    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:50.061101    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:50.061101    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:50.315715    3340 round_trippers.go:574] Response Status: 200 OK in 254 milliseconds
	I0923 12:17:50.316728    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:50.562144    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:50.562144    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:50.562382    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:50.562382    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:50.568182    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:51.060904    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:51.060904    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:51.060904    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:51.060904    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:51.065697    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:51.561708    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:51.561708    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:51.561708    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:51.561708    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:51.568002    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:52.061018    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:52.061018    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:52.061018    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:52.061018    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:52.066813    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:52.560912    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:52.560912    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:52.560912    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:52.560912    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:52.566702    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:52.567605    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:53.061770    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:53.062111    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:53.062111    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:53.062111    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:53.067725    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:53.561053    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:53.561053    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:53.561053    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:53.561053    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:53.567504    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:54.061453    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:54.061453    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:54.061453    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:54.061453    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:54.066814    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:54.562027    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:54.562027    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:54.562027    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:54.562027    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:54.567759    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:54.568562    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:55.062024    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:55.062024    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:55.062024    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:55.062024    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:55.066868    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:55.562855    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:55.562855    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:55.562855    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:55.562855    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:55.567901    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:56.061841    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:56.061841    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:56.061841    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:56.061841    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:56.067589    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:56.561601    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:56.561601    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:56.561601    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:56.561601    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:56.565916    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:57.061770    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:57.061770    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:57.061770    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:57.061770    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:57.066170    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:57.066899    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:57.561592    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:57.561592    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:57.561592    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:57.561592    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:57.566974    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:58.061919    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:58.061919    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:58.061919    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:58.061919    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:58.066110    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:58.562293    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:58.562293    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:58.562293    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:58.562293    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:58.567761    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:59.062415    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:59.062415    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:59.062415    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:59.062415    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:59.067197    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:59.068062    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:59.561998    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:59.561998    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:59.561998    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:59.561998    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:59.567039    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:00.061465    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:00.061465    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:00.061465    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:00.061465    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:00.067105    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:00.561586    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:00.561586    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:00.561586    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:00.561586    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:00.566690    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:01.061947    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:01.061947    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:01.061947    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:01.061947    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:01.068043    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:01.068778    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:18:01.561820    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:01.561820    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:01.561820    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:01.561820    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:01.567622    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:02.061943    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:02.061943    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:02.061943    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:02.061943    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:02.067765    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:02.562489    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:02.562489    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:02.562489    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:02.562489    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:02.574227    3340 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:18:03.062242    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:03.062242    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:03.062242    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:03.062242    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:03.068929    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:03.069243    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:18:03.561767    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:03.561767    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:03.561767    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:03.561767    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:03.567315    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:04.062124    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:04.062124    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:04.062124    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:04.062124    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:04.068387    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:04.562176    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:04.562176    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:04.562176    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:04.562176    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:04.568967    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:05.063623    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:05.063717    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.063717    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.063792    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.072290    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:18:05.072849    3340 node_ready.go:49] node "ha-565300-m02" has status "Ready":"True"
	I0923 12:18:05.072849    3340 node_ready.go:38] duration metric: took 21.5111279s for node "ha-565300-m02" to be "Ready" ...
	I0923 12:18:05.072849    3340 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:18:05.072849    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:05.072849    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.072849    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.072849    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.080783    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:18:05.091165    3340 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.091165    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7jzhc
	I0923 12:18:05.091165    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.091165    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.091165    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.096479    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:05.097204    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.097204    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.097204    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.097204    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.101339    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:18:05.102805    3340 pod_ready.go:93] pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.102805    3340 pod_ready.go:82] duration metric: took 11.6387ms for pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.102805    3340 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.102805    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kf224
	I0923 12:18:05.102805    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.102805    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.102805    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.106401    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.107685    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.107773    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.107773    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.107773    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.111017    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.111498    3340 pod_ready.go:93] pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.111564    3340 pod_ready.go:82] duration metric: took 8.759ms for pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.111564    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.111631    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300
	I0923 12:18:05.111631    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.111631    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.111631    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.115083    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.115669    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.115724    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.115724    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.115724    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.119781    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:18:05.119781    3340 pod_ready.go:93] pod "etcd-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.119781    3340 pod_ready.go:82] duration metric: took 8.2163ms for pod "etcd-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.119781    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.120319    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300-m02
	I0923 12:18:05.120319    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.120319    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.120319    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.123494    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.125026    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:05.125026    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.125026    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.125026    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.128385    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.128879    3340 pod_ready.go:93] pod "etcd-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.128879    3340 pod_ready.go:82] duration metric: took 9.0971ms for pod "etcd-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.128970    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.262871    3340 request.go:632] Waited for 133.8513ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300
	I0923 12:18:05.263165    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300
	I0923 12:18:05.263165    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.263165    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.263165    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.267122    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.463099    3340 request.go:632] Waited for 195.162ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.463694    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.463694    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.463694    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.463799    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.468772    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:18:05.469264    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.469340    3340 pod_ready.go:82] duration metric: took 340.3475ms for pod "kube-apiserver-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.469340    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.663065    3340 request.go:632] Waited for 193.7112ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m02
	I0923 12:18:05.663418    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m02
	I0923 12:18:05.663418    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.663625    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.663625    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.671500    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:18:05.863124    3340 request.go:632] Waited for 190.7906ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:05.863124    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:05.863124    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.863124    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.863124    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.868637    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:05.868926    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.869452    3340 pod_ready.go:82] duration metric: took 399.5584ms for pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.869452    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.063339    3340 request.go:632] Waited for 193.7803ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300
	I0923 12:18:06.063339    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300
	I0923 12:18:06.063339    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.063339    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.063339    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.069782    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:06.263542    3340 request.go:632] Waited for 192.2584ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:06.263542    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:06.263542    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.263542    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.263542    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.269259    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:06.270238    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:06.270320    3340 pod_ready.go:82] duration metric: took 400.8413ms for pod "kube-controller-manager-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.270320    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.463597    3340 request.go:632] Waited for 193.0966ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m02
	I0923 12:18:06.463597    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m02
	I0923 12:18:06.463597    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.463597    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.463597    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.469696    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:06.663242    3340 request.go:632] Waited for 192.1872ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:06.663723    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:06.663723    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.663855    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.663855    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.668902    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:06.669957    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:06.670073    3340 pod_ready.go:82] duration metric: took 399.6444ms for pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.670141    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzwmh" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.862982    3340 request.go:632] Waited for 192.7033ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzwmh
	I0923 12:18:06.862982    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzwmh
	I0923 12:18:06.862982    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.862982    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.862982    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.869796    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:07.063178    3340 request.go:632] Waited for 192.5882ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:07.063178    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:07.063178    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.063178    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.063178    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.068322    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:07.068848    3340 pod_ready.go:93] pod "kube-proxy-jzwmh" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:07.069033    3340 pod_ready.go:82] duration metric: took 398.7827ms for pod "kube-proxy-jzwmh" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.069033    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s4s8g" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.263439    3340 request.go:632] Waited for 194.393ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s4s8g
	I0923 12:18:07.263439    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s4s8g
	I0923 12:18:07.263439    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.263439    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.263439    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.269141    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:07.463892    3340 request.go:632] Waited for 193.5763ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:07.463892    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:07.463892    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.463892    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.463892    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.469866    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:07.470189    3340 pod_ready.go:93] pod "kube-proxy-s4s8g" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:07.470189    3340 pod_ready.go:82] duration metric: took 401.1287ms for pod "kube-proxy-s4s8g" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.470189    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.663766    3340 request.go:632] Waited for 193.5638ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300
	I0923 12:18:07.663766    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300
	I0923 12:18:07.663766    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.663766    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.663766    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.668975    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:07.863375    3340 request.go:632] Waited for 193.5517ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:07.863375    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:07.863375    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.863897    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.863897    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.870426    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:07.871420    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:07.871500    3340 pod_ready.go:82] duration metric: took 401.2838ms for pod "kube-scheduler-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.871500    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:08.063739    3340 request.go:632] Waited for 192.1666ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m02
	I0923 12:18:08.063991    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m02
	I0923 12:18:08.063991    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.063991    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.063991    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.069332    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:08.263960    3340 request.go:632] Waited for 193.7897ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:08.263960    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:08.263960    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.263960    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.263960    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.269032    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:08.269582    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:08.269582    3340 pod_ready.go:82] duration metric: took 398.0547ms for pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:08.269582    3340 pod_ready.go:39] duration metric: took 3.1965166s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:18:08.269751    3340 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:18:08.277928    3340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:18:08.302019    3340 api_server.go:72] duration metric: took 25.0713421s to wait for apiserver process to appear ...
	I0923 12:18:08.302019    3340 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:18:08.302019    3340 api_server.go:253] Checking apiserver healthz at https://172.19.146.194:8443/healthz ...
	I0923 12:18:08.310467    3340 api_server.go:279] https://172.19.146.194:8443/healthz returned 200:
	ok
	I0923 12:18:08.310655    3340 round_trippers.go:463] GET https://172.19.146.194:8443/version
	I0923 12:18:08.310655    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.310655    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.310774    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.312478    3340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:18:08.312656    3340 api_server.go:141] control plane version: v1.31.1
	I0923 12:18:08.312697    3340 api_server.go:131] duration metric: took 10.6775ms to wait for apiserver health ...
	I0923 12:18:08.312697    3340 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:18:08.463914    3340 request.go:632] Waited for 151.1261ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:08.463914    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:08.463914    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.463914    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.463914    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.469905    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:08.477123    3340 system_pods.go:59] 17 kube-system pods found
	I0923 12:18:08.477123    3340 system_pods.go:61] "coredns-7c65d6cfc9-7jzhc" [3410fd4d-a455-48c7-a6c3-7b3af6aa50a6] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "coredns-7c65d6cfc9-kf224" [08055950-19ea-4d96-b610-ca1d025c25c2] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "etcd-ha-565300" [fa5fe799-27bb-442e-9093-70d1f91fd7f3] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "etcd-ha-565300-m02" [18c247e2-8721-4662-b8db-b9174e535412] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kindnet-gvvph" [c728d1b2-d98f-4947-a971-dca1b05ba54a] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kindnet-jcj4l" [e9f183eb-5b54-4852-a996-4b4ce9a938d9] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-apiserver-ha-565300" [89e33fd1-9346-4a7d-a6c2-37a1cc636b58] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-apiserver-ha-565300-m02" [8c350e1d-ee2d-4a80-8ed8-8140a2b2e660] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-controller-manager-ha-565300" [d4599166-8583-47c0-a3c8-dc8c28fac9a2] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-controller-manager-ha-565300-m02" [6f035dd0-acd5-4162-b0d1-f37dff03d62f] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-proxy-jzwmh" [335d0452-7c30-4fe2-b0bb-d79af97b1a2d] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-proxy-s4s8g" [85c46e0e-ab32-420e-a9b7-fee9d360c8ec] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-scheduler-ha-565300" [a9ea8c2a-bfe0-4c4d-9da8-fd3b48b518b1] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-scheduler-ha-565300-m02" [de3cea24-2ae5-4a8e-8dff-3baa6cbd136f] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-vip-ha-565300" [800f2b80-94bc-4068-86eb-95bc7d58cdd7] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-vip-ha-565300-m02" [5a2386d6-9706-4c61-9e8a-b1a39838f0f9] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "storage-provisioner" [e8126304-9d6c-4f7f-ac79-f0bbf61690b3] Running
	I0923 12:18:08.477123    3340 system_pods.go:74] duration metric: took 164.4146ms to wait for pod list to return data ...
	I0923 12:18:08.477123    3340 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:18:08.663572    3340 request.go:632] Waited for 186.4367ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:18:08.663572    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:18:08.663572    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.663572    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.663572    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.668785    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:08.669270    3340 default_sa.go:45] found service account: "default"
	I0923 12:18:08.669351    3340 default_sa.go:55] duration metric: took 192.2154ms for default service account to be created ...
	I0923 12:18:08.669422    3340 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:18:08.863842    3340 request.go:632] Waited for 194.2973ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:08.863842    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:08.863842    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.863842    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.863842    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.872004    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:18:08.878357    3340 system_pods.go:86] 17 kube-system pods found
	I0923 12:18:08.878450    3340 system_pods.go:89] "coredns-7c65d6cfc9-7jzhc" [3410fd4d-a455-48c7-a6c3-7b3af6aa50a6] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "coredns-7c65d6cfc9-kf224" [08055950-19ea-4d96-b610-ca1d025c25c2] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "etcd-ha-565300" [fa5fe799-27bb-442e-9093-70d1f91fd7f3] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "etcd-ha-565300-m02" [18c247e2-8721-4662-b8db-b9174e535412] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kindnet-gvvph" [c728d1b2-d98f-4947-a971-dca1b05ba54a] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kindnet-jcj4l" [e9f183eb-5b54-4852-a996-4b4ce9a938d9] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-apiserver-ha-565300" [89e33fd1-9346-4a7d-a6c2-37a1cc636b58] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-apiserver-ha-565300-m02" [8c350e1d-ee2d-4a80-8ed8-8140a2b2e660] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-controller-manager-ha-565300" [d4599166-8583-47c0-a3c8-dc8c28fac9a2] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-controller-manager-ha-565300-m02" [6f035dd0-acd5-4162-b0d1-f37dff03d62f] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-proxy-jzwmh" [335d0452-7c30-4fe2-b0bb-d79af97b1a2d] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-proxy-s4s8g" [85c46e0e-ab32-420e-a9b7-fee9d360c8ec] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-scheduler-ha-565300" [a9ea8c2a-bfe0-4c4d-9da8-fd3b48b518b1] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-scheduler-ha-565300-m02" [de3cea24-2ae5-4a8e-8dff-3baa6cbd136f] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-vip-ha-565300" [800f2b80-94bc-4068-86eb-95bc7d58cdd7] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-vip-ha-565300-m02" [5a2386d6-9706-4c61-9e8a-b1a39838f0f9] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "storage-provisioner" [e8126304-9d6c-4f7f-ac79-f0bbf61690b3] Running
	I0923 12:18:08.878450    3340 system_pods.go:126] duration metric: took 209.0142ms to wait for k8s-apps to be running ...
	I0923 12:18:08.878450    3340 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:18:08.890654    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:18:08.915347    3340 system_svc.go:56] duration metric: took 36.8942ms WaitForService to wait for kubelet
	I0923 12:18:08.915347    3340 kubeadm.go:582] duration metric: took 25.6846285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:18:08.915409    3340 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:18:09.063228    3340 request.go:632] Waited for 147.7537ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes
	I0923 12:18:09.063573    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes
	I0923 12:18:09.063573    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:09.063573    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:09.063573    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:09.069107    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:09.070175    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:18:09.070251    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:18:09.070251    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:18:09.070251    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:18:09.070251    3340 node_conditions.go:105] duration metric: took 154.8316ms to run NodePressure ...
	I0923 12:18:09.070251    3340 start.go:241] waiting for startup goroutines ...
	I0923 12:18:09.070325    3340 start.go:255] writing updated cluster config ...
	I0923 12:18:09.073500    3340 out.go:201] 
	I0923 12:18:09.091181    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:18:09.091411    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:18:09.100346    3340 out.go:177] * Starting "ha-565300-m03" control-plane node in "ha-565300" cluster
	I0923 12:18:09.103066    3340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:18:09.103180    3340 cache.go:56] Caching tarball of preloaded images
	I0923 12:18:09.103555    3340 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:18:09.103555    3340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:18:09.103555    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:18:09.107634    3340 start.go:360] acquireMachinesLock for ha-565300-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:18:09.107634    3340 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-565300-m03"
	I0923 12:18:09.108337    3340 start.go:93] Provisioning new machine with config: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:18:09.108475    3340 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0923 12:18:09.111519    3340 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:18:09.111519    3340 start.go:159] libmachine.API.Create for "ha-565300" (driver="hyperv")
	I0923 12:18:09.111519    3340 client.go:168] LocalClient.Create starting
	I0923 12:18:09.112535    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0923 12:18:09.112535    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:18:09.112535    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:18:09.113080    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0923 12:18:09.113147    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:18:09.113281    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:18:09.113379    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0923 12:18:10.806778    3340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0923 12:18:10.806854    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:10.806919    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0923 12:18:12.315363    3340 main.go:141] libmachine: [stdout =====>] : False
	
	I0923 12:18:12.315363    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:12.315527    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:18:13.630727    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:18:13.630727    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:13.630727    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:18:16.798971    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:18:16.798971    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:16.800962    3340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:18:17.147222    3340 main.go:141] libmachine: Creating SSH key...
	I0923 12:18:17.265476    3340 main.go:141] libmachine: Creating VM...
	I0923 12:18:17.266481    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:18:19.812541    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:18:19.812541    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:19.812541    3340 main.go:141] libmachine: Using switch "Default Switch"
	I0923 12:18:19.812541    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:18:21.392940    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:18:21.393972    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:21.393972    3340 main.go:141] libmachine: Creating VHD
	I0923 12:18:21.394036    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0923 12:18:24.745970    3340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 572D1E1F-CF72-433A-A3B1-2FCF56C6B5B3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0923 12:18:24.747018    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:24.747018    3340 main.go:141] libmachine: Writing magic tar header
	I0923 12:18:24.747018    3340 main.go:141] libmachine: Writing SSH key tar header
	I0923 12:18:24.755422    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0923 12:18:27.655166    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:27.656236    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:27.656394    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\disk.vhd' -SizeBytes 20000MB
	I0923 12:18:29.926223    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:29.926223    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:29.926734    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-565300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0923 12:18:33.088698    3340 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-565300-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0923 12:18:33.089494    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:33.089494    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-565300-m03 -DynamicMemoryEnabled $false
	I0923 12:18:35.048041    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:35.048322    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:35.048322    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-565300-m03 -Count 2
	I0923 12:18:36.953873    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:36.954893    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:36.954893    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-565300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\boot2docker.iso'
	I0923 12:18:39.218869    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:39.218869    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:39.219752    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-565300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\disk.vhd'
	I0923 12:18:41.528327    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:41.528327    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:41.528327    3340 main.go:141] libmachine: Starting VM...
	I0923 12:18:41.528327    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-565300-m03
	I0923 12:18:44.307967    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:44.308902    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:44.308902    3340 main.go:141] libmachine: Waiting for host to start...
	I0923 12:18:44.308902    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:18:46.280983    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:18:46.280983    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:46.280983    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:18:48.493926    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:48.493992    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:49.495241    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:18:51.475969    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:18:51.476065    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:51.476123    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:18:53.685779    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:53.685857    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:54.686645    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:18:56.619214    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:18:56.619783    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:56.619783    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:18:58.837535    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:58.837535    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:59.840742    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:01.788340    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:01.788340    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:01.789052    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:04.016339    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:19:04.016553    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:05.017603    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:06.960456    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:06.960456    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:06.960456    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:09.280732    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:09.280732    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:09.281551    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:11.246280    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:11.246392    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:11.246392    3340 machine.go:93] provisionDockerMachine start ...
	I0923 12:19:11.246392    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:13.168372    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:13.168937    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:13.168937    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:15.403239    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:15.403309    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:15.407625    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:15.417717    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:15.417717    3340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:19:15.559478    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:19:15.559478    3340 buildroot.go:166] provisioning hostname "ha-565300-m03"
	I0923 12:19:15.559671    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:17.455996    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:17.456588    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:17.456687    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:19.706647    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:19.707409    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:19.712212    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:19.712212    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:19.712212    3340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565300-m03 && echo "ha-565300-m03" | sudo tee /etc/hostname
	I0923 12:19:19.878313    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565300-m03
	
	I0923 12:19:19.878313    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:21.743553    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:21.743553    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:21.744571    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:24.002780    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:24.002780    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:24.007353    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:24.007779    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:24.007846    3340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:19:24.161469    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:19:24.161469    3340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 12:19:24.161469    3340 buildroot.go:174] setting up certificates
	I0923 12:19:24.161555    3340 provision.go:84] configureAuth start
	I0923 12:19:24.161618    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:26.033202    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:26.033202    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:26.034404    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:28.279801    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:28.280523    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:28.280603    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:30.124164    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:30.124221    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:30.124221    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:32.363027    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:32.363027    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:32.363594    3340 provision.go:143] copyHostCerts
	I0923 12:19:32.363594    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 12:19:32.363594    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 12:19:32.363594    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 12:19:32.364364    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 12:19:32.364982    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 12:19:32.364982    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 12:19:32.365507    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 12:19:32.365774    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 12:19:32.366371    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 12:19:32.366972    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 12:19:32.366972    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 12:19:32.366972    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 12:19:32.367568    3340 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-565300-m03 san=[127.0.0.1 172.19.153.80 ha-565300-m03 localhost minikube]
	I0923 12:19:32.461119    3340 provision.go:177] copyRemoteCerts
	I0923 12:19:32.468103    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:19:32.468103    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:34.327901    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:34.327901    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:34.328031    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:36.527864    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:36.528523    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:36.528698    3340 sshutil.go:53] new ssh client: &{IP:172.19.153.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\id_rsa Username:docker}
	I0923 12:19:36.640009    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1715198s)
	I0923 12:19:36.640054    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 12:19:36.640385    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:19:36.688237    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 12:19:36.688237    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:19:36.731763    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 12:19:36.732162    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:19:36.774326    3340 provision.go:87] duration metric: took 12.6119202s to configureAuth
	I0923 12:19:36.774326    3340 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:19:36.774326    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:19:36.774910    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:38.595929    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:38.595929    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:38.596125    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:40.807626    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:40.807626    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:40.811685    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:40.812105    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:40.812105    3340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:19:40.951589    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:19:40.951589    3340 buildroot.go:70] root file system type: tmpfs
	I0923 12:19:40.952014    3340 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:19:40.952171    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:42.827426    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:42.828123    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:42.828256    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:45.073256    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:45.073985    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:45.077872    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:45.078346    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:45.078468    3340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.146.194"
	Environment="NO_PROXY=172.19.146.194,172.19.154.133"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:19:45.253498    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.146.194
	Environment=NO_PROXY=172.19.146.194,172.19.154.133
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:19:45.253498    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:47.117473    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:47.118469    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:47.118545    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:49.384773    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:49.384773    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:49.388813    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:49.388866    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:49.388866    3340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:19:51.540595    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 12:19:51.540661    3340 machine.go:96] duration metric: took 40.2915487s to provisionDockerMachine
	I0923 12:19:51.540661    3340 client.go:171] duration metric: took 1m42.4222277s to LocalClient.Create
	I0923 12:19:51.540661    3340 start.go:167] duration metric: took 1m42.4222277s to libmachine.API.Create "ha-565300"
	I0923 12:19:51.540661    3340 start.go:293] postStartSetup for "ha-565300-m03" (driver="hyperv")
	I0923 12:19:51.540749    3340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:19:51.549547    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:19:51.549547    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:53.394973    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:53.394973    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:53.395892    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:55.636999    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:55.637068    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:55.637402    3340 sshutil.go:53] new ssh client: &{IP:172.19.153.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\id_rsa Username:docker}
	I0923 12:19:55.749213    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1993422s)
	I0923 12:19:55.757515    3340 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:19:55.764503    3340 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:19:55.764503    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 12:19:55.764923    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 12:19:55.765445    3340 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 12:19:55.765584    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 12:19:55.774426    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:19:55.793854    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 12:19:55.837595    3340 start.go:296] duration metric: took 4.2965558s for postStartSetup
	I0923 12:19:55.841686    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:57.709548    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:57.709884    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:57.709884    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:59.944505    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:59.945506    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:59.945506    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:19:59.947235    3340 start.go:128] duration metric: took 1m50.8312785s to createHost
	I0923 12:19:59.947235    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:01.856105    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:01.856105    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:01.856759    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:04.194738    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:04.194823    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:04.198853    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:20:04.199376    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:20:04.199376    3340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:20:04.334751    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727094004.542351523
	
	I0923 12:20:04.334751    3340 fix.go:216] guest clock: 1727094004.542351523
	I0923 12:20:04.334751    3340 fix.go:229] Guest: 2024-09-23 12:20:04.542351523 +0000 UTC Remote: 2024-09-23 12:19:59.9472359 +0000 UTC m=+507.005833201 (delta=4.595115623s)
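The clock-skew fix above (read the guest clock with `date +%s.%N`, compare against the host-side timestamp, then run `sudo date -s @<epoch>`) can be sketched as follows. The numeric values are illustrative, taken from the delta reported in the log; the 2-second threshold is an assumption, not minikube's actual cutoff.

```shell
# Sketch of the clock-skew check-and-set the log shows above.
guest=1727094004      # epoch seconds the guest reported via `date +%s.%N`
host=1727093999       # hypothetical host-side timestamp when the SSH command returned
delta=$((guest - host))
[ "$delta" -lt 0 ] && delta=$((-delta))   # absolute value
if [ "$delta" -gt 2 ]; then
  # minikube's next step in the log: reset the guest clock
  echo "sudo date -s @$guest"
fi
```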
	I0923 12:20:04.334751    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:06.276301    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:06.277770    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:06.277770    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:08.558784    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:08.559931    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:08.562949    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:20:08.563579    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:20:08.563579    3340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727094004
	I0923 12:20:08.707857    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 12:20:04 UTC 2024
	
	I0923 12:20:08.707857    3340 fix.go:236] clock set: Mon Sep 23 12:20:04 UTC 2024
	 (err=<nil>)
	I0923 12:20:08.707857    3340 start.go:83] releasing machines lock for "ha-565300-m03", held for 1m59.591601s
	I0923 12:20:08.708378    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:10.620747    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:10.620747    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:10.620747    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:12.946580    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:12.946580    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:12.949414    3340 out.go:177] * Found network options:
	I0923 12:20:12.952175    3340 out.go:177]   - NO_PROXY=172.19.146.194,172.19.154.133
	W0923 12:20:12.954405    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:20:12.954405    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:20:12.956036    3340 out.go:177]   - NO_PROXY=172.19.146.194,172.19.154.133
	W0923 12:20:12.958721    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:20:12.958721    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:20:12.959772    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:20:12.959772    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:20:12.960913    3340 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 12:20:12.960913    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:12.968277    3340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:20:12.968277    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:14.944840    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:14.944939    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:14.944939    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:14.956615    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:14.956615    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:14.956615    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:17.359163    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:17.359163    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:17.360167    3340 sshutil.go:53] new ssh client: &{IP:172.19.153.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\id_rsa Username:docker}
	I0923 12:20:17.386160    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:17.386330    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:17.386330    3340 sshutil.go:53] new ssh client: &{IP:172.19.153.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\id_rsa Username:docker}
	I0923 12:20:17.462087    3340 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5008696s)
	W0923 12:20:17.462087    3340 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 12:20:17.478325    3340 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5097443s)
	W0923 12:20:17.478325    3340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:20:17.487385    3340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:20:17.514409    3340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
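The bridge/podman CNI configs are not deleted; they are parked by appending a `.mk_disabled` suffix via `find -exec mv`. The same invocation can be exercised against a scratch directory instead of the live `/etc/cni/net.d` (file names here are examples):

```shell
# Park bridge/podman CNI configs by renaming them, as in the log above.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/1-k8s.conflist"
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
files=$(ls "$d")
rm -rf "$d"
```

Only the podman bridge config matches and gets the suffix; the unrelated `1-k8s.conflist` is left alone.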
	I0923 12:20:17.514447    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:20:17.514619    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0923 12:20:17.559304    3340 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 12:20:17.559304    3340 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 12:20:17.568053    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:20:17.601738    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:20:17.621081    3340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:20:17.630019    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:20:17.657784    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:20:17.688995    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:20:17.717493    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:20:17.745220    3340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:20:17.773855    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:20:17.801602    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:20:17.829091    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
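The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver and the pinned pause image. Two of those edits can be reproduced against a scratch TOML fragment (the fragment's contents are a minimal assumption, not the real shipped config):

```shell
# Reproduce the SystemdCgroup and sandbox_image sed edits from the log
# against a throwaway config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
result=$(grep -E 'SystemdCgroup|sandbox_image' "$cfg")
rm -f "$cfg"
echo "$result"
```

The leading-whitespace capture group (`( *)` with `\1`) is what preserves TOML indentation across the rewrite.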
	I0923 12:20:17.861027    3340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:20:17.879158    3340 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:20:17.887698    3340 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:20:17.916111    3340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:20:17.940130    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:18.136023    3340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:20:18.167582    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:20:18.177232    3340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:20:18.209265    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:20:18.242269    3340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:20:18.286260    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:20:18.319668    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:20:18.355651    3340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:20:18.410807    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:20:18.435372    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
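The crictl endpoint switch above (from the containerd socket to `cri-dockerd.sock`) uses a `mkdir -p` + `printf | tee` idiom; a sketch against a scratch directory instead of `/etc`:

```shell
# Write crictl.yaml with the same printf | tee pattern, minus sudo,
# into a temporary directory standing in for /etc.
dir=$(mktemp -d)
/bin/bash -c "mkdir -p $dir && printf %s 'runtime-endpoint: unix:///var/run/cri-dockerd.sock
' | tee $dir/crictl.yaml" >/dev/null
content=$(cat "$dir/crictl.yaml")
rm -rf "$dir"
```

`tee` (rather than a `>` redirect) is what lets the real invocation write a root-owned file while the pipeline itself runs unprivileged.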
	I0923 12:20:18.477132    3340 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:20:18.493549    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:20:18.510446    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:20:18.551428    3340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:20:18.743277    3340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:20:18.918896    3340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:20:18.919074    3340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:20:18.965505    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:19.162392    3340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:20:21.747436    3340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5848701s)
	I0923 12:20:21.759292    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:20:21.795623    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:20:21.828964    3340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:20:22.021226    3340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:20:22.216364    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:22.404607    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:20:22.443738    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:20:22.476532    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:22.678094    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:20:22.799090    3340 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:20:22.807302    3340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:20:22.817000    3340 start.go:563] Will wait 60s for crictl version
	I0923 12:20:22.825689    3340 ssh_runner.go:195] Run: which crictl
	I0923 12:20:22.840199    3340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:20:22.902477    3340 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:20:22.909207    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:20:22.946030    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:20:22.979838    3340 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:20:22.984316    3340 out.go:177]   - env NO_PROXY=172.19.146.194
	I0923 12:20:22.987777    3340 out.go:177]   - env NO_PROXY=172.19.146.194,172.19.154.133
	I0923 12:20:22.990038    3340 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 12:20:22.994890    3340 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 12:20:22.994890    3340 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 12:20:22.994890    3340 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 12:20:22.994890    3340 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 12:20:22.998413    3340 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 12:20:22.999120    3340 ip.go:214] interface addr: 172.19.144.1/20
	I0923 12:20:23.008285    3340 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 12:20:23.018330    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
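The `/etc/hosts` update above is a replace-or-append idiom: strip any existing `host.minikube.internal` line, append the fresh one, then copy the result back. Demonstrated on a scratch hosts file (the stale `172.19.144.9` entry is hypothetical):

```shell
# Replace-or-append the host.minikube.internal mapping, as in the log.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.144.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.19.144.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
entry=$(grep 'host.minikube.internal' "$hosts")
rm -f "$hosts"
```

Because `grep -v` filters the old entry first, rerunning the command is idempotent: the file ends up with exactly one mapping.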
	I0923 12:20:23.039863    3340 mustload.go:65] Loading cluster: ha-565300
	I0923 12:20:23.040597    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:20:23.041140    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:20:24.958651    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:24.958651    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:24.958651    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:20:24.959748    3340 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300 for IP: 172.19.153.80
	I0923 12:20:24.959748    3340 certs.go:194] generating shared ca certs ...
	I0923 12:20:24.960270    3340 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:20:24.960611    3340 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 12:20:24.961228    3340 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 12:20:24.961228    3340 certs.go:256] generating profile certs ...
	I0923 12:20:24.961952    3340 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key
	I0923 12:20:24.961952    3340 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.ca49bce9
	I0923 12:20:24.961952    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.ca49bce9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.146.194 172.19.154.133 172.19.153.80 172.19.159.254]
	I0923 12:20:25.260128    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.ca49bce9 ...
	I0923 12:20:25.260128    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.ca49bce9: {Name:mk79814649a4720b0ca874ac6d62fb512a44243f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:20:25.261138    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.ca49bce9 ...
	I0923 12:20:25.261138    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.ca49bce9: {Name:mk47d62a2b375e625148b664ca7055bc4683018c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:20:25.261536    3340 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.ca49bce9 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt
	I0923 12:20:25.274507    3340 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.ca49bce9 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key
	I0923 12:20:25.274823    3340 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key
	I0923 12:20:25.274823    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:20:25.274823    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:20:25.274823    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:20:25.274823    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:20:25.275844    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:20:25.275844    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:20:25.275844    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:20:25.275844    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:20:25.277195    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 12:20:25.277462    3340 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 12:20:25.277559    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 12:20:25.277736    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 12:20:25.278023    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 12:20:25.278177    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 12:20:25.278177    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 12:20:25.278177    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:20:25.278718    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 12:20:25.278899    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 12:20:25.279020    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:20:27.200659    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:27.200659    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:27.200994    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:29.492497    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:20:29.492497    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:29.493314    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:20:29.584645    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:20:29.592601    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:20:29.619190    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:20:29.627345    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 12:20:29.655125    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:20:29.661295    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:20:29.688364    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:20:29.694472    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:20:29.720902    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:20:29.727529    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:20:29.755424    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:20:29.765943    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0923 12:20:29.788698    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:20:29.837699    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 12:20:29.884681    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:20:29.926994    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:20:29.978505    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0923 12:20:30.028613    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:20:30.075784    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:20:30.118699    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:20:30.162808    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:20:30.206365    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 12:20:30.251993    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 12:20:30.295181    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:20:30.323164    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 12:20:30.351814    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:20:30.381466    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:20:30.411508    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:20:30.443795    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0923 12:20:30.476591    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:20:30.514443    3340 ssh_runner.go:195] Run: openssl version
	I0923 12:20:30.532198    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:20:30.560819    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:20:30.566835    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:20:30.575107    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:20:30.591982    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:20:30.620457    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 12:20:30.649748    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 12:20:30.656894    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 12:20:30.665713    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 12:20:30.682069    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 12:20:30.711829    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 12:20:30.741972    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 12:20:30.749291    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 12:20:30.761554    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 12:20:30.779799    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
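The cert steps above follow OpenSSL's trust-store convention: each CA PEM in /etc/ssl/certs must be reachable via a symlink named after its subject-name hash (the `b5213941.0`, `51391683.0`, `3ec20f2e.0` links). A minimal self-contained sketch of the same pattern, using a throwaway self-signed cert in a temp dir rather than minikube's real CA:

```shell
# Demonstrate the <subject-hash>.0 symlink convention used above.
set -eu
dir=$(mktemp -d)
# Throwaway self-signed cert (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
# Same command the log runs to compute the trust-store link name.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# Same "test -L || ln -fs" idempotent-link pattern as the log.
test -L "$dir/$hash.0" || ln -fs "$dir/ca.pem" "$dir/$hash.0"
echo "linked as $hash.0"
```

The hash is 8 hex digits derived from the certificate subject, which is why the link names in the log look opaque.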
	I0923 12:20:30.810134    3340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:20:30.816949    3340 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:20:30.817169    3340 kubeadm.go:934] updating node {m03 172.19.153.80 8443 v1.31.1 docker true true} ...
	I0923 12:20:30.817169    3340 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.153.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:20:30.817169    3340 kube-vip.go:115] generating kube-vip config ...
	I0923 12:20:30.825298    3340 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:20:30.857390    3340 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:20:30.857390    3340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
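The generated manifest above configures kube-vip's control-plane leader election with a 5s lease, 3s renew deadline, and 1s retry period. The usual consistency constraint for these knobs (as in Kubernetes leader election generally) is retryperiod < renewdeadline < leaseduration; a trivial check of the values minikube emitted:

```shell
# Leader-election timings from the kube-vip manifest above (seconds).
leaseduration=5
renewdeadline=3
retryperiod=1
# Sanity constraint: each retry must fit inside the renew deadline,
# and renewal must complete before the lease expires.
[ "$retryperiod" -lt "$renewdeadline" ] && \
  [ "$renewdeadline" -lt "$leaseduration" ] && \
  echo "timings consistent"
```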
	I0923 12:20:30.867663    3340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:20:30.889150    3340 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:20:30.898124    3340 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:20:30.914886    3340 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 12:20:30.914886    3340 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 12:20:30.914886    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:20:30.914886    3340 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 12:20:30.917181    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:20:30.928069    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:20:30.928297    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:20:30.931276    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:20:30.954476    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:20:30.954476    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:20:30.954758    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:20:30.954758    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:20:30.955106    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:20:30.963680    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:20:31.014734    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:20:31.016110    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
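The "Not caching binary" lines show the source minikube pulls from: dl.k8s.io publishes each release binary alongside a `.sha256` file, and the `checksum=file:` suffix means the download is verified against it before the scp to the node. A hedged sketch of the equivalent manual fetch (the network calls are left commented; only the URL construction runs here):

```shell
# Rebuild the download URL pattern from the log for one binary.
ver=v1.31.1
arch=amd64
bin=kubectl
url="https://dl.k8s.io/release/${ver}/bin/linux/${arch}/${bin}"
echo "$url"
# Manual equivalent of the checksum-verified fetch (requires network):
# curl -LO "$url" && curl -LO "${url}.sha256"
# echo "$(cat ${bin}.sha256)  ${bin}" | sha256sum --check
```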
	I0923 12:20:31.898713    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:20:31.915394    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:20:31.947672    3340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:20:31.979750    3340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:20:32.023381    3340 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:20:32.029193    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:20:32.064143    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:32.261236    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:20:32.289266    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:20:32.289266    3340 start.go:317] joinCluster: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.19.153.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor
-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:20:32.289266    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:20:32.289266    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:20:34.164421    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:34.164421    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:34.165346    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:36.443209    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:20:36.444169    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:36.444169    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:20:36.633203    3340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3428791s)
	I0923 12:20:36.633302    3340 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.19.153.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:20:36.633302    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdy155.vq6ux4r409f7wy9t --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-565300-m03 --control-plane --apiserver-advertise-address=172.19.153.80 --apiserver-bind-port=8443"
	I0923 12:21:19.613533    3340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdy155.vq6ux4r409f7wy9t --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-565300-m03 --control-plane --apiserver-advertise-address=172.19.153.80 --apiserver-bind-port=8443": (42.9766738s)
	I0923 12:21:19.613533    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:21:20.430882    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565300-m03 minikube.k8s.io/updated_at=2024_09_23T12_21_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-565300 minikube.k8s.io/primary=false
	I0923 12:21:20.591665    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565300-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 12:21:20.733285    3340 start.go:319] duration metric: took 48.4407491s to joinCluster
	I0923 12:21:20.734266    3340 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.19.153.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:21:20.734266    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:21:20.737832    3340 out.go:177] * Verifying Kubernetes components...
	I0923 12:21:20.747598    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:21:21.094640    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:21:21.124386    3340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:21:21.125314    3340 kapi.go:59] client config for ha-565300: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:21:21.125447    3340 kubeadm.go:483] Overriding stale ClientConfig host https://172.19.159.254:8443 with https://172.19.146.194:8443
	I0923 12:21:21.126524    3340 node_ready.go:35] waiting up to 6m0s for node "ha-565300-m03" to be "Ready" ...
	I0923 12:21:21.126789    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:21.126789    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:21.126886    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:21.126886    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:21.139434    3340 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0923 12:21:21.627311    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:21.627434    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:21.627434    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:21.627434    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:21.635818    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:21:22.126890    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:22.126890    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:22.126890    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:22.126890    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:22.134370    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:22.627425    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:22.627425    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:22.627425    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:22.627425    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:22.630482    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:23.127788    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:23.127853    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:23.127853    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:23.127853    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:23.131871    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:23.132537    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:23.627172    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:23.627172    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:23.627172    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:23.627172    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:23.637231    3340 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 12:21:24.127625    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:24.127648    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:24.127648    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:24.127648    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:24.469571    3340 round_trippers.go:574] Response Status: 200 OK in 341 milliseconds
	I0923 12:21:24.627882    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:24.627882    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:24.627882    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:24.627882    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:24.632614    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:25.127101    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:25.127101    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:25.127101    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:25.127101    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:25.131995    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:25.132862    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:25.628001    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:25.628001    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:25.628001    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:25.628001    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:25.656672    3340 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0923 12:21:26.128280    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:26.128358    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:26.128358    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:26.128406    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:26.136136    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:26.627578    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:26.627578    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:26.627578    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:26.627578    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:26.632240    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:27.127798    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:27.127798    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:27.127798    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:27.127798    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:27.134986    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:27.136056    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:27.627285    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:27.627285    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:27.627285    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:27.627285    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:27.631749    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:28.127870    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:28.127870    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:28.127870    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:28.127870    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:28.133034    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:28.628335    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:28.628400    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:28.628400    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:28.628400    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:28.632366    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:29.127512    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:29.127512    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:29.127512    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:29.127512    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:29.624094    3340 round_trippers.go:574] Response Status: 200 OK in 496 milliseconds
	I0923 12:21:29.625013    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:29.627426    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:29.627426    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:29.627426    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:29.627426    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:29.631853    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:30.128409    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:30.128409    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:30.128409    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:30.128409    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:30.780632    3340 round_trippers.go:574] Response Status: 200 OK in 652 milliseconds
	I0923 12:21:30.781645    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:30.781645    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:30.781645    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:30.781645    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:30.786281    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:31.128084    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:31.128084    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:31.128084    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:31.128084    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:31.133795    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:31.628046    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:31.628046    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:31.628046    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:31.628046    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:31.632655    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:31.633276    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:32.127988    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:32.127988    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:32.127988    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:32.127988    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:32.273508    3340 round_trippers.go:574] Response Status: 200 OK in 145 milliseconds
	I0923 12:21:32.627592    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:32.627592    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:32.627592    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:32.627592    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:32.631523    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:33.128697    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:33.128697    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:33.128697    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:33.128697    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:33.270635    3340 round_trippers.go:574] Response Status: 200 OK in 141 milliseconds
	I0923 12:21:33.628459    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:33.628459    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:33.628459    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:33.628459    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:33.633200    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:33.633760    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:34.128560    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:34.128560    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:34.128631    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:34.128631    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:34.135154    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:21:34.629108    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:34.629108    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:34.629108    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:34.629108    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:34.663927    3340 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0923 12:21:35.128561    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:35.128561    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:35.128561    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:35.128561    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:35.136589    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:21:35.628443    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:35.628844    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:35.628844    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:35.628844    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:35.633197    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:35.634146    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:36.128413    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:36.128413    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:36.128413    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:36.128413    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:36.229842    3340 round_trippers.go:574] Response Status: 200 OK in 101 milliseconds
	I0923 12:21:36.628226    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:36.628226    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:36.628226    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:36.628226    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:37.676753    3340 round_trippers.go:574] Response Status: 200 OK in 1048 milliseconds
	I0923 12:21:37.677028    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:37.677028    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:37.677028    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:37.677028    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:37.677028    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:37.686987    3340 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 12:21:38.128494    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:38.128494    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:38.128494    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:38.128494    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:38.133491    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:38.628965    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:38.628965    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:38.628965    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:38.628965    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:39.163544    3340 round_trippers.go:574] Response Status: 200 OK in 534 milliseconds
	I0923 12:21:39.164124    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:39.164124    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:39.164124    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:39.164124    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:39.186366    3340 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0923 12:21:39.628621    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:39.629133    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:39.629133    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:39.629133    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:40.154047    3340 round_trippers.go:574] Response Status: 200 OK in 524 milliseconds
	I0923 12:21:40.154987    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:40.155160    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:40.155160    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:40.155160    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:40.155160    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:40.160544    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:40.629380    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:40.629380    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:40.629447    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:40.629447    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:40.634154    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:41.128356    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:41.128918    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:41.128918    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:41.128918    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:41.133449    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:41.628866    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:41.628866    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:41.628866    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:41.628866    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:41.632064    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:42.128548    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:42.128548    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:42.128548    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:42.128548    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:42.177356    3340 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0923 12:21:42.178218    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:42.628237    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:42.628237    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:42.628237    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:42.628237    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:42.632341    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.129542    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:43.129542    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.129542    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.129542    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.134363    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.135703    3340 node_ready.go:49] node "ha-565300-m03" has status "Ready":"True"
	I0923 12:21:43.135703    3340 node_ready.go:38] duration metric: took 22.0075913s for node "ha-565300-m03" to be "Ready" ...
	I0923 12:21:43.135759    3340 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:21:43.135842    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:21:43.135905    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.135905    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.135905    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.146535    3340 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 12:21:43.155412    3340 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.155412    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7jzhc
	I0923 12:21:43.155412    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.155412    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.155412    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.160015    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.161602    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:43.161602    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.161602    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.161602    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.169631    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:43.170593    3340 pod_ready.go:93] pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:43.170593    3340 pod_ready.go:82] duration metric: took 15.1801ms for pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.170593    3340 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.170593    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kf224
	I0923 12:21:43.170593    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.170593    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.170593    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.174351    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:43.176050    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:43.176109    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.176109    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.176165    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.180395    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.180395    3340 pod_ready.go:93] pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:43.180395    3340 pod_ready.go:82] duration metric: took 9.8012ms for pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.180395    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.181393    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300
	I0923 12:21:43.181393    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.181393    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.181393    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.184019    3340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:21:43.185030    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:43.185030    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.185030    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.185030    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.190484    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:43.190744    3340 pod_ready.go:93] pod "etcd-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:43.190744    3340 pod_ready.go:82] duration metric: took 10.3476ms for pod "etcd-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.190744    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.190744    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300-m02
	I0923 12:21:43.190744    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.190744    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.190744    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.195973    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.196025    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:43.196025    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.196025    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.196571    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.203825    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:43.204537    3340 pod_ready.go:93] pod "etcd-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:43.204573    3340 pod_ready.go:82] duration metric: took 13.8288ms for pod "etcd-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.204573    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.329771    3340 request.go:632] Waited for 125.113ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300-m03
	I0923 12:21:43.329771    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300-m03
	I0923 12:21:43.329771    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.329771    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.329771    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.334280    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.529806    3340 request.go:632] Waited for 194.3407ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:43.529806    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:43.529806    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.529806    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.529806    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.140592    3340 round_trippers.go:574] Response Status: 200 OK in 610 milliseconds
	I0923 12:21:44.140724    3340 pod_ready.go:93] pod "etcd-ha-565300-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:44.140724    3340 pod_ready.go:82] duration metric: took 936.0875ms for pod "etcd-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.141260    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.141448    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300
	I0923 12:21:44.141473    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.141501    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.141501    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.146748    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:44.148764    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:44.148840    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.148840    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.148840    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.152173    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:44.152808    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:44.152843    3340 pod_ready.go:82] duration metric: took 11.5823ms for pod "kube-apiserver-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.152884    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.152963    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m02
	I0923 12:21:44.153005    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.153041    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.153041    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.156958    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:44.330808    3340 request.go:632] Waited for 173.8386ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:44.330808    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:44.330808    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.330808    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.330808    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.335395    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:44.336569    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:44.336625    3340 pod_ready.go:82] duration metric: took 183.7286ms for pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.336625    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.529942    3340 request.go:632] Waited for 193.1916ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m03
	I0923 12:21:44.529942    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m03
	I0923 12:21:44.529942    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.529942    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.529942    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.534860    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:44.730499    3340 request.go:632] Waited for 194.7489ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:44.730499    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:44.730499    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.730499    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.730499    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.735616    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:44.736389    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:44.736389    3340 pod_ready.go:82] duration metric: took 399.6809ms for pod "kube-apiserver-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.736389    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.930285    3340 request.go:632] Waited for 193.8057ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300
	I0923 12:21:44.930285    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300
	I0923 12:21:44.930285    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.930285    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.930285    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.935962    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:45.130510    3340 request.go:632] Waited for 193.6585ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:45.130885    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:45.130995    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.130995    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.130995    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.135291    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:45.136385    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:45.136450    3340 pod_ready.go:82] duration metric: took 400.034ms for pod "kube-controller-manager-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.136450    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.330843    3340 request.go:632] Waited for 194.3155ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m02
	I0923 12:21:45.330843    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m02
	I0923 12:21:45.330843    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.330843    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.330843    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.338488    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:45.530131    3340 request.go:632] Waited for 190.3373ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:45.530131    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:45.530131    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.530131    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.530131    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.542151    3340 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0923 12:21:45.543026    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:45.543026    3340 pod_ready.go:82] duration metric: took 406.5494ms for pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.543026    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.730446    3340 request.go:632] Waited for 187.4073ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m03
	I0923 12:21:45.730446    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m03
	I0923 12:21:45.730446    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.730446    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.730446    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.736091    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:45.930080    3340 request.go:632] Waited for 192.8022ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:45.930080    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:45.930080    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.930080    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.930080    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.936069    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:45.936798    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:45.936865    3340 pod_ready.go:82] duration metric: took 393.8121ms for pod "kube-controller-manager-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.936923    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9fdqn" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.130038    3340 request.go:632] Waited for 193.0438ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fdqn
	I0923 12:21:46.130038    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fdqn
	I0923 12:21:46.130038    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.130038    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.130038    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.136063    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:46.330656    3340 request.go:632] Waited for 194.1853ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:46.330997    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:46.330997    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.330997    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.330997    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.334682    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:46.335481    3340 pod_ready.go:93] pod "kube-proxy-9fdqn" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:46.335481    3340 pod_ready.go:82] duration metric: took 398.5311ms for pod "kube-proxy-9fdqn" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.335481    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzwmh" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.529870    3340 request.go:632] Waited for 194.2332ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzwmh
	I0923 12:21:46.530259    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzwmh
	I0923 12:21:46.530259    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.530259    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.530259    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.537100    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:21:46.729871    3340 request.go:632] Waited for 191.1244ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:46.730225    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:46.730293    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.730293    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.730293    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.738454    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:46.738578    3340 pod_ready.go:93] pod "kube-proxy-jzwmh" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:46.738578    3340 pod_ready.go:82] duration metric: took 403.0696ms for pod "kube-proxy-jzwmh" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.738578    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s4s8g" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.930002    3340 request.go:632] Waited for 191.4113ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s4s8g
	I0923 12:21:46.930002    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s4s8g
	I0923 12:21:46.930510    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.930510    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.930510    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.935554    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:47.130057    3340 request.go:632] Waited for 193.3999ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:47.130057    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:47.130057    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.130057    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.130057    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.138658    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:21:47.139609    3340 pod_ready.go:93] pod "kube-proxy-s4s8g" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:47.139609    3340 pod_ready.go:82] duration metric: took 401.0039ms for pod "kube-proxy-s4s8g" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.139609    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.330035    3340 request.go:632] Waited for 190.4134ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300
	I0923 12:21:47.330035    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300
	I0923 12:21:47.330035    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.330035    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.330035    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.337612    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:47.531179    3340 request.go:632] Waited for 192.7814ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:47.531179    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:47.531179    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.531179    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.531179    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.538448    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:47.539473    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:47.539530    3340 pod_ready.go:82] duration metric: took 399.8377ms for pod "kube-scheduler-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.539530    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.730673    3340 request.go:632] Waited for 190.9754ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m02
	I0923 12:21:47.730673    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m02
	I0923 12:21:47.730673    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.730673    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.730673    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.736033    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:47.930501    3340 request.go:632] Waited for 193.3078ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:47.930867    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:47.930867    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.930867    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.930867    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.936657    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:47.937635    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:47.937699    3340 pod_ready.go:82] duration metric: took 398.1424ms for pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.937699    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:48.130812    3340 request.go:632] Waited for 192.9783ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m03
	I0923 12:21:48.130812    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m03
	I0923 12:21:48.130812    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.130812    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.130812    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.293270    3340 round_trippers.go:574] Response Status: 200 OK in 162 milliseconds
	I0923 12:21:48.331110    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:48.331307    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.331307    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.331307    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.391571    3340 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0923 12:21:48.392701    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:48.392701    3340 pod_ready.go:82] duration metric: took 454.9152ms for pod "kube-scheduler-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:48.392769    3340 pod_ready.go:39] duration metric: took 5.2566558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:21:48.392837    3340 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:21:48.402638    3340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:21:48.430037    3340 api_server.go:72] duration metric: took 27.6939011s to wait for apiserver process to appear ...
	I0923 12:21:48.430103    3340 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:21:48.430170    3340 api_server.go:253] Checking apiserver healthz at https://172.19.146.194:8443/healthz ...
	I0923 12:21:48.438019    3340 api_server.go:279] https://172.19.146.194:8443/healthz returned 200:
	ok
	I0923 12:21:48.438160    3340 round_trippers.go:463] GET https://172.19.146.194:8443/version
	I0923 12:21:48.438176    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.438176    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.438176    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.439418    3340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:21:48.439581    3340 api_server.go:141] control plane version: v1.31.1
	I0923 12:21:48.439620    3340 api_server.go:131] duration metric: took 9.4105ms to wait for apiserver health ...
	I0923 12:21:48.439620    3340 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:21:48.530592    3340 request.go:632] Waited for 90.7524ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:21:48.530592    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:21:48.530592    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.530592    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.530592    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.873468    3340 round_trippers.go:574] Response Status: 200 OK in 342 milliseconds
	I0923 12:21:48.883799    3340 system_pods.go:59] 24 kube-system pods found
	I0923 12:21:48.883799    3340 system_pods.go:61] "coredns-7c65d6cfc9-7jzhc" [3410fd4d-a455-48c7-a6c3-7b3af6aa50a6] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "coredns-7c65d6cfc9-kf224" [08055950-19ea-4d96-b610-ca1d025c25c2] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "etcd-ha-565300" [fa5fe799-27bb-442e-9093-70d1f91fd7f3] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "etcd-ha-565300-m02" [18c247e2-8721-4662-b8db-b9174e535412] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "etcd-ha-565300-m03" [02e5f7e1-6097-482b-9c7f-d6a806858da2] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kindnet-gvvph" [c728d1b2-d98f-4947-a971-dca1b05ba54a] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kindnet-j45vw" [2bc2bb0f-f609-4780-a13e-3c0d3b8f20d7] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kindnet-jcj4l" [e9f183eb-5b54-4852-a996-4b4ce9a938d9] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-apiserver-ha-565300" [89e33fd1-9346-4a7d-a6c2-37a1cc636b58] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-apiserver-ha-565300-m02" [8c350e1d-ee2d-4a80-8ed8-8140a2b2e660] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-apiserver-ha-565300-m03" [639ce30d-84fa-4bb1-a0c9-52a8dc896100] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-controller-manager-ha-565300" [d4599166-8583-47c0-a3c8-dc8c28fac9a2] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-controller-manager-ha-565300-m02" [6f035dd0-acd5-4162-b0d1-f37dff03d62f] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-controller-manager-ha-565300-m03" [345dc9c1-d760-4ea8-90f1-62934babffe9] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-proxy-9fdqn" [de0503b5-3ec6-4d2f-bb9a-b8f670c1abcd] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-proxy-jzwmh" [335d0452-7c30-4fe2-b0bb-d79af97b1a2d] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-proxy-s4s8g" [85c46e0e-ab32-420e-a9b7-fee9d360c8ec] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-scheduler-ha-565300" [a9ea8c2a-bfe0-4c4d-9da8-fd3b48b518b1] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-scheduler-ha-565300-m02" [de3cea24-2ae5-4a8e-8dff-3baa6cbd136f] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-scheduler-ha-565300-m03" [305c9f7d-70a4-4a9f-b50d-5cdedfcd204b] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-vip-ha-565300" [800f2b80-94bc-4068-86eb-95bc7d58cdd7] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-vip-ha-565300-m02" [5a2386d6-9706-4c61-9e8a-b1a39838f0f9] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-vip-ha-565300-m03" [757fd58d-0e45-4408-9832-027591ab9d09] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "storage-provisioner" [e8126304-9d6c-4f7f-ac79-f0bbf61690b3] Running
	I0923 12:21:48.883799    3340 system_pods.go:74] duration metric: took 444.1482ms to wait for pod list to return data ...
	I0923 12:21:48.883799    3340 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:21:48.883799    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:21:48.883799    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.883799    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.883799    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.889657    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:48.889850    3340 default_sa.go:45] found service account: "default"
	I0923 12:21:48.889850    3340 default_sa.go:55] duration metric: took 6.0508ms for default service account to be created ...
	I0923 12:21:48.889900    3340 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:21:48.930213    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:21:48.930213    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.930213    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.930213    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.969663    3340 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0923 12:21:48.979299    3340 system_pods.go:86] 24 kube-system pods found
	I0923 12:21:48.979395    3340 system_pods.go:89] "coredns-7c65d6cfc9-7jzhc" [3410fd4d-a455-48c7-a6c3-7b3af6aa50a6] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "coredns-7c65d6cfc9-kf224" [08055950-19ea-4d96-b610-ca1d025c25c2] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "etcd-ha-565300" [fa5fe799-27bb-442e-9093-70d1f91fd7f3] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "etcd-ha-565300-m02" [18c247e2-8721-4662-b8db-b9174e535412] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "etcd-ha-565300-m03" [02e5f7e1-6097-482b-9c7f-d6a806858da2] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kindnet-gvvph" [c728d1b2-d98f-4947-a971-dca1b05ba54a] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kindnet-j45vw" [2bc2bb0f-f609-4780-a13e-3c0d3b8f20d7] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kindnet-jcj4l" [e9f183eb-5b54-4852-a996-4b4ce9a938d9] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-apiserver-ha-565300" [89e33fd1-9346-4a7d-a6c2-37a1cc636b58] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-apiserver-ha-565300-m02" [8c350e1d-ee2d-4a80-8ed8-8140a2b2e660] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-apiserver-ha-565300-m03" [639ce30d-84fa-4bb1-a0c9-52a8dc896100] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-controller-manager-ha-565300" [d4599166-8583-47c0-a3c8-dc8c28fac9a2] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-controller-manager-ha-565300-m02" [6f035dd0-acd5-4162-b0d1-f37dff03d62f] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-controller-manager-ha-565300-m03" [345dc9c1-d760-4ea8-90f1-62934babffe9] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-proxy-9fdqn" [de0503b5-3ec6-4d2f-bb9a-b8f670c1abcd] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-proxy-jzwmh" [335d0452-7c30-4fe2-b0bb-d79af97b1a2d] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-proxy-s4s8g" [85c46e0e-ab32-420e-a9b7-fee9d360c8ec] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-scheduler-ha-565300" [a9ea8c2a-bfe0-4c4d-9da8-fd3b48b518b1] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-scheduler-ha-565300-m02" [de3cea24-2ae5-4a8e-8dff-3baa6cbd136f] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-scheduler-ha-565300-m03" [305c9f7d-70a4-4a9f-b50d-5cdedfcd204b] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-vip-ha-565300" [800f2b80-94bc-4068-86eb-95bc7d58cdd7] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-vip-ha-565300-m02" [5a2386d6-9706-4c61-9e8a-b1a39838f0f9] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-vip-ha-565300-m03" [757fd58d-0e45-4408-9832-027591ab9d09] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "storage-provisioner" [e8126304-9d6c-4f7f-ac79-f0bbf61690b3] Running
	I0923 12:21:48.979395    3340 system_pods.go:126] duration metric: took 89.489ms to wait for k8s-apps to be running ...
	I0923 12:21:48.979395    3340 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:21:48.987931    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:21:49.014481    3340 system_svc.go:56] duration metric: took 35.0832ms WaitForService to wait for kubelet
	I0923 12:21:49.014554    3340 kubeadm.go:582] duration metric: took 28.278379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:21:49.014625    3340 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:21:49.130252    3340 request.go:632] Waited for 115.5168ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes
	I0923 12:21:49.130252    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes
	I0923 12:21:49.130252    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:49.130252    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:49.130252    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:49.136945    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:21:49.138100    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:21:49.138100    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:21:49.138213    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:21:49.138213    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:21:49.138213    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:21:49.138213    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:21:49.138213    3340 node_conditions.go:105] duration metric: took 123.5791ms to run NodePressure ...
	I0923 12:21:49.138213    3340 start.go:241] waiting for startup goroutines ...
	I0923 12:21:49.138213    3340 start.go:255] writing updated cluster config ...
	I0923 12:21:49.148042    3340 ssh_runner.go:195] Run: rm -f paused
	I0923 12:21:49.278631    3340 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:21:49.290159    3340 out.go:177] * Done! kubectl is now configured to use "ha-565300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.695698636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.723372777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.723458482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.723488084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.723649994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:45 ha-565300 cri-dockerd[1321]: time="2024-09-23T12:14:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8c05c72015312af8a6c4b368cb2fd302186faa02e1caa119729602e1027f3ad/resolv.conf as [nameserver 172.19.144.1]"
	Sep 23 12:14:45 ha-565300 cri-dockerd[1321]: time="2024-09-23T12:14:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ec96b961c47200351c60f916faeae6e6d01781fb1659afec1103dd2255fa789d/resolv.conf as [nameserver 172.19.144.1]"
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.103590014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.103764825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.103786727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.105837158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.153717627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.153851235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.153874737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.154009446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:22:32 ha-565300 dockerd[1429]: time="2024-09-23T12:22:32.343090308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:22:32 ha-565300 dockerd[1429]: time="2024-09-23T12:22:32.343430028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:22:32 ha-565300 dockerd[1429]: time="2024-09-23T12:22:32.343474331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:22:32 ha-565300 dockerd[1429]: time="2024-09-23T12:22:32.343634640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:22:32 ha-565300 cri-dockerd[1321]: time="2024-09-23T12:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/94d7ba7dd4e11e602b396a5754f5a9c0a4d8b23595aafe2181de568836040596/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 12:22:35 ha-565300 cri-dockerd[1321]: time="2024-09-23T12:22:35Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Sep 23 12:22:37 ha-565300 dockerd[1429]: time="2024-09-23T12:22:37.015010228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:22:37 ha-565300 dockerd[1429]: time="2024-09-23T12:22:37.015142337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:22:37 ha-565300 dockerd[1429]: time="2024-09-23T12:22:37.015161338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:22:37 ha-565300 dockerd[1429]: time="2024-09-23T12:22:37.015274745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ff23db9d03c23       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   94d7ba7dd4e11       busybox-7dff88458-rjg7r
	21587833455a5       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   ec96b961c4720       storage-provisioner
	3913e82ea5d64       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   a8c05c7201531       coredns-7c65d6cfc9-7jzhc
	9e936da45f9fc       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   b694930c61f03       coredns-7c65d6cfc9-kf224
	ec009d58ec024       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              9 minutes ago        Running             kindnet-cni               0                   581d1866dc0e1       kindnet-gvvph
	5a8e37d9bdb76       60c005f310ff3                                                                                         9 minutes ago        Running             kube-proxy                0                   ada4b7396f1f9       kube-proxy-s4s8g
	e04d5fa3131b0       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     9 minutes ago        Running             kube-vip                  0                   0e4f892d50a24       kube-vip-ha-565300
	6557cb9820342       2e96e5913fc06                                                                                         10 minutes ago       Running             etcd                      0                   f17f48f36f54b       etcd-ha-565300
	bb14fd3d1b742       175ffd71cce3d                                                                                         10 minutes ago       Running             kube-controller-manager   0                   d5c4129b72c11       kube-controller-manager-ha-565300
	3c9ae68aa117b       9aa1fad941575                                                                                         10 minutes ago       Running             kube-scheduler            0                   9a0b7e2df2fe3       kube-scheduler-ha-565300
	d6fe896ee937c       6bab7719df100                                                                                         10 minutes ago       Running             kube-apiserver            0                   4ac1baf148601       kube-apiserver-ha-565300
	
	
	==> coredns [3913e82ea5d6] <==
	[INFO] 10.244.2.3:38504 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.14862568s
	[INFO] 10.244.0.4:56004 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174112s
	[INFO] 10.244.0.4:52799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.01414357s
	[INFO] 10.244.1.2:36426 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000096507s
	[INFO] 10.244.2.3:51155 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000324522s
	[INFO] 10.244.2.3:36383 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.048859679s
	[INFO] 10.244.2.3:53302 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170012s
	[INFO] 10.244.2.3:43083 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173512s
	[INFO] 10.244.0.4:56969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000166811s
	[INFO] 10.244.0.4:36041 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.110350104s
	[INFO] 10.244.0.4:40805 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00028752s
	[INFO] 10.244.0.4:36040 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014601s
	[INFO] 10.244.1.2:43033 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207914s
	[INFO] 10.244.1.2:35421 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000201613s
	[INFO] 10.244.1.2:53463 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162011s
	[INFO] 10.244.1.2:41559 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164811s
	[INFO] 10.244.1.2:59905 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173312s
	[INFO] 10.244.2.3:46533 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105507s
	[INFO] 10.244.0.4:33331 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135609s
	[INFO] 10.244.0.4:51753 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014631s
	[INFO] 10.244.1.2:38901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117208s
	[INFO] 10.244.2.3:59701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199213s
	[INFO] 10.244.0.4:42855 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000320521s
	[INFO] 10.244.0.4:46554 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00014041s
	[INFO] 10.244.1.2:36654 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000189313s
	
	
	==> coredns [9e936da45f9f] <==
	[INFO] 10.244.2.3:50668 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014649483s
	[INFO] 10.244.2.3:58314 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000228015s
	[INFO] 10.244.0.4:37445 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267718s
	[INFO] 10.244.0.4:55085 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203414s
	[INFO] 10.244.0.4:42792 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117908s
	[INFO] 10.244.0.4:38076 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000277418s
	[INFO] 10.244.1.2:50453 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245016s
	[INFO] 10.244.1.2:48448 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000080205s
	[INFO] 10.244.1.2:48024 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014181s
	[INFO] 10.244.2.3:50673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134209s
	[INFO] 10.244.2.3:33924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137709s
	[INFO] 10.244.2.3:56280 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097907s
	[INFO] 10.244.0.4:41015 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131709s
	[INFO] 10.244.0.4:57270 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083506s
	[INFO] 10.244.1.2:56697 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091006s
	[INFO] 10.244.1.2:59874 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179412s
	[INFO] 10.244.1.2:51098 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198014s
	[INFO] 10.244.2.3:46102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152611s
	[INFO] 10.244.2.3:42225 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118308s
	[INFO] 10.244.2.3:53183 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123209s
	[INFO] 10.244.0.4:51947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247417s
	[INFO] 10.244.0.4:46586 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000189712s
	[INFO] 10.244.1.2:50141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181312s
	[INFO] 10.244.1.2:52940 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134009s
	[INFO] 10.244.1.2:41234 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108707s
	
	
	==> describe nodes <==
	Name:               ha-565300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-565300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_14_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:14:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:24:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:22:57 +0000   Mon, 23 Sep 2024 12:14:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:22:57 +0000   Mon, 23 Sep 2024 12:14:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:22:57 +0000   Mon, 23 Sep 2024 12:14:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:22:57 +0000   Mon, 23 Sep 2024 12:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.146.194
	  Hostname:    ha-565300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 277e2ef6a1034548ba796628eeb28a0c
	  System UUID:                c6a5291c-50da-454e-ae27-77fb67747768
	  Boot ID:                    a3f90f42-719a-4941-8f49-77af7d69f6fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rjg7r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 coredns-7c65d6cfc9-7jzhc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m49s
	  kube-system                 coredns-7c65d6cfc9-kf224             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m49s
	  kube-system                 etcd-ha-565300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m55s
	  kube-system                 kindnet-gvvph                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m49s
	  kube-system                 kube-apiserver-ha-565300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-controller-manager-ha-565300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-proxy-s4s8g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 kube-scheduler-ha-565300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-vip-ha-565300                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m48s  kube-proxy       
	  Normal  Starting                 9m55s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m55s  kubelet          Node ha-565300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m55s  kubelet          Node ha-565300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m55s  kubelet          Node ha-565300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m51s  node-controller  Node ha-565300 event: Registered Node ha-565300 in Controller
	  Normal  NodeReady                9m28s  kubelet          Node ha-565300 status is now: NodeReady
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-565300 event: Registered Node ha-565300 in Controller
	  Normal  RegisteredNode           2m47s  node-controller  Node ha-565300 event: Registered Node ha-565300 in Controller
	
	
	Name:               ha-565300-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-565300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_17_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:17:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:24:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:22:43 +0000   Mon, 23 Sep 2024 12:17:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:22:43 +0000   Mon, 23 Sep 2024 12:17:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:22:43 +0000   Mon, 23 Sep 2024 12:17:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:22:43 +0000   Mon, 23 Sep 2024 12:18:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.154.133
	  Hostname:    ha-565300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9cf6ce84e674600883910dd751f04ef
	  System UUID:                426d5aa4-7fc6-4a4b-8233-6561accfd3ed
	  Boot ID:                    f4fbe51f-d1ad-482c-b5a1-2346cd2181ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x4chx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 etcd-ha-565300-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m31s
	  kube-system                 kindnet-jcj4l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-apiserver-ha-565300-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-ha-565300-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-proxy-jzwmh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-scheduler-ha-565300-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-vip-ha-565300-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node ha-565300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node ha-565300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m35s)  kubelet          Node ha-565300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-565300-m02 event: Registered Node ha-565300-m02 in Controller
	  Normal  RegisteredNode           6m24s                  node-controller  Node ha-565300-m02 event: Registered Node ha-565300-m02 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-565300-m02 event: Registered Node ha-565300-m02 in Controller
	
	
	Name:               ha-565300-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-565300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_21_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:21:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:24:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:22:46 +0000   Mon, 23 Sep 2024 12:21:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:22:46 +0000   Mon, 23 Sep 2024 12:21:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:22:46 +0000   Mon, 23 Sep 2024 12:21:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:22:46 +0000   Mon, 23 Sep 2024 12:21:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.153.80
	  Hostname:    ha-565300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0f3c71bd3647bcbec4b56c1efbbcf7
	  System UUID:                267aef2d-fc53-c64f-8edf-0d874d3b3472
	  Boot ID:                    9c536902-5ca1-4323-91fd-b411caa4957e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-45cpz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 etcd-ha-565300-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m54s
	  kube-system                 kindnet-j45vw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m58s
	  kube-system                 kube-apiserver-ha-565300-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kube-controller-manager-ha-565300-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kube-proxy-9fdqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kube-scheduler-ha-565300-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-vip-ha-565300-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m58s (x8 over 2m58s)  kubelet          Node ha-565300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x8 over 2m58s)  kubelet          Node ha-565300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x7 over 2m58s)  kubelet          Node ha-565300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m55s                  node-controller  Node ha-565300-m03 event: Registered Node ha-565300-m03 in Controller
	  Normal  RegisteredNode           2m54s                  node-controller  Node ha-565300-m03 event: Registered Node ha-565300-m03 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-565300-m03 event: Registered Node ha-565300-m03 in Controller
	
	
	==> dmesg <==
	[  +1.341465] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.288041] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep23 12:13] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.151887] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[ +27.199041] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.078626] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.476615] systemd-fstab-generator[1033]: Ignoring "noauto" option for root device
	[  +0.169199] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.213134] systemd-fstab-generator[1059]: Ignoring "noauto" option for root device
	[  +2.785501] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.184487] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.187603] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.241356] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[ +10.813058] systemd-fstab-generator[1415]: Ignoring "noauto" option for root device
	[  +0.098752] kauditd_printk_skb: 202 callbacks suppressed
	[Sep23 12:14] systemd-fstab-generator[1670]: Ignoring "noauto" option for root device
	[  +5.089158] systemd-fstab-generator[1812]: Ignoring "noauto" option for root device
	[  +0.087210] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.080182] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.945235] systemd-fstab-generator[2307]: Ignoring "noauto" option for root device
	[  +6.768468] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.374632] kauditd_printk_skb: 29 callbacks suppressed
	[Sep23 12:17] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6557cb982034] <==
	{"level":"info","ts":"2024-09-23T12:23:07.119966Z","caller":"traceutil/trace.go:171","msg":"trace[1496020474] transaction","detail":"{read_only:false; response_revision:1843; number_of_response:1; }","duration":"247.079016ms","start":"2024-09-23T12:23:06.872873Z","end":"2024-09-23T12:23:07.119952Z","steps":["trace[1496020474] 'process raft request'  (duration: 246.982409ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:23:07.124321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.932493ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-09-23T12:23:07.124377Z","caller":"traceutil/trace.go:171","msg":"trace[178014825] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1843; }","duration":"225.998098ms","start":"2024-09-23T12:23:06.898369Z","end":"2024-09-23T12:23:07.124367Z","steps":["trace[178014825] 'agreement among raft nodes before linearized reading'  (duration: 225.859788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:23:11.419620Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c2991b7348d5d635","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"64.271456ms"}
	{"level":"warn","ts":"2024-09-23T12:23:11.419972Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"4108d1a4ebe19ff4","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"64.62438ms"}
	{"level":"info","ts":"2024-09-23T12:23:11.427495Z","caller":"traceutil/trace.go:171","msg":"trace[1610099113] linearizableReadLoop","detail":"{readStateIndex:2142; appliedIndex:2142; }","duration":"217.530316ms","start":"2024-09-23T12:23:11.209884Z","end":"2024-09-23T12:23:11.427414Z","steps":["trace[1610099113] 'read index received'  (duration: 217.526416ms)","trace[1610099113] 'applied index is now lower than readState.Index'  (duration: 2.8µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T12:23:11.427769Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.936444ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-09-23T12:23:11.427900Z","caller":"traceutil/trace.go:171","msg":"trace[1169319988] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1855; }","duration":"218.081153ms","start":"2024-09-23T12:23:11.209810Z","end":"2024-09-23T12:23:11.427891Z","steps":["trace[1169319988] 'agreement among raft nodes before linearized reading'  (duration: 217.894541ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:23:11.428785Z","caller":"traceutil/trace.go:171","msg":"trace[211382232] transaction","detail":"{read_only:false; response_revision:1856; number_of_response:1; }","duration":"266.449403ms","start":"2024-09-23T12:23:11.162324Z","end":"2024-09-23T12:23:11.428773Z","steps":["trace[211382232] 'process raft request'  (duration: 265.442135ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:23:12.850345Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c2991b7348d5d635","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"554.845009ms"}
	{"level":"warn","ts":"2024-09-23T12:23:12.850475Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"4108d1a4ebe19ff4","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"554.978517ms"}
	{"level":"info","ts":"2024-09-23T12:23:12.875199Z","caller":"traceutil/trace.go:171","msg":"trace[1670668863] linearizableReadLoop","detail":"{readStateIndex:2144; appliedIndex:2144; }","duration":"410.830999ms","start":"2024-09-23T12:23:12.464347Z","end":"2024-09-23T12:23:12.875178Z","steps":["trace[1670668863] 'read index received'  (duration: 410.822598ms)","trace[1670668863] 'applied index is now lower than readState.Index'  (duration: 7µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T12:23:12.889464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"425.046653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:434"}
	{"level":"info","ts":"2024-09-23T12:23:12.889879Z","caller":"traceutil/trace.go:171","msg":"trace[1408349449] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:1857; }","duration":"425.538386ms","start":"2024-09-23T12:23:12.464326Z","end":"2024-09-23T12:23:12.889865Z","steps":["trace[1408349449] 'agreement among raft nodes before linearized reading'  (duration: 411.177421ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:23:12.890106Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T12:23:12.464287Z","time spent":"425.800704ms","remote":"127.0.0.1:37642","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":1,"response size":458,"request content":"key:\"/registry/leases/kube-system/plndr-cp-lock\" "}
	{"level":"info","ts":"2024-09-23T12:23:12.891246Z","caller":"traceutil/trace.go:171","msg":"trace[784689872] transaction","detail":"{read_only:false; response_revision:1858; number_of_response:1; }","duration":"178.658902ms","start":"2024-09-23T12:23:12.712568Z","end":"2024-09-23T12:23:12.891227Z","steps":["trace[784689872] 'process raft request'  (duration: 176.55466ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:23:15.695752Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c2991b7348d5d635","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"161.090642ms"}
	{"level":"warn","ts":"2024-09-23T12:23:15.695864Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"4108d1a4ebe19ff4","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"161.223751ms"}
	{"level":"info","ts":"2024-09-23T12:23:15.697229Z","caller":"traceutil/trace.go:171","msg":"trace[19693373] linearizableReadLoop","detail":"{readStateIndex:2154; appliedIndex:2155; }","duration":"240.335035ms","start":"2024-09-23T12:23:15.456880Z","end":"2024-09-23T12:23:15.697215Z","steps":["trace[19693373] 'read index received'  (duration: 240.331035ms)","trace[19693373] 'applied index is now lower than readState.Index'  (duration: 2.8µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T12:23:15.698328Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.479446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-09-23T12:23:15.698383Z","caller":"traceutil/trace.go:171","msg":"trace[627987952] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1865; }","duration":"241.497914ms","start":"2024-09-23T12:23:15.456874Z","end":"2024-09-23T12:23:15.698372Z","steps":["trace[627987952] 'agreement among raft nodes before linearized reading'  (duration: 240.40114ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:23:16.131455Z","caller":"traceutil/trace.go:171","msg":"trace[30434927] transaction","detail":"{read_only:false; response_revision:1867; number_of_response:1; }","duration":"144.571105ms","start":"2024-09-23T12:23:15.986870Z","end":"2024-09-23T12:23:16.131441Z","steps":["trace[30434927] 'process raft request'  (duration: 144.440896ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:24:11.759333Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1062}
	{"level":"info","ts":"2024-09-23T12:24:11.917508Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1062,"took":"155.987965ms","hash":712510112,"current-db-size-bytes":3670016,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-09-23T12:24:11.918537Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":712510112,"revision":1062,"compact-revision":-1}
	
	
	==> kernel <==
	 12:24:12 up 11 min,  0 users,  load average: 0.84, 1.07, 0.59
	Linux ha-565300 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ec009d58ec02] <==
	I0923 12:23:32.292488       1 main.go:299] handling current node
	I0923 12:23:42.288964       1 main.go:295] Handling node with IPs: map[172.19.154.133:{}]
	I0923 12:23:42.288999       1 main.go:322] Node ha-565300-m02 has CIDR [10.244.1.0/24] 
	I0923 12:23:42.289180       1 main.go:295] Handling node with IPs: map[172.19.153.80:{}]
	I0923 12:23:42.289206       1 main.go:322] Node ha-565300-m03 has CIDR [10.244.2.0/24] 
	I0923 12:23:42.289268       1 main.go:295] Handling node with IPs: map[172.19.146.194:{}]
	I0923 12:23:42.289287       1 main.go:299] handling current node
	I0923 12:23:52.289768       1 main.go:295] Handling node with IPs: map[172.19.146.194:{}]
	I0923 12:23:52.290323       1 main.go:299] handling current node
	I0923 12:23:52.290376       1 main.go:295] Handling node with IPs: map[172.19.154.133:{}]
	I0923 12:23:52.290401       1 main.go:322] Node ha-565300-m02 has CIDR [10.244.1.0/24] 
	I0923 12:23:52.290666       1 main.go:295] Handling node with IPs: map[172.19.153.80:{}]
	I0923 12:23:52.290749       1 main.go:322] Node ha-565300-m03 has CIDR [10.244.2.0/24] 
	I0923 12:24:02.292534       1 main.go:295] Handling node with IPs: map[172.19.146.194:{}]
	I0923 12:24:02.292736       1 main.go:299] handling current node
	I0923 12:24:02.292758       1 main.go:295] Handling node with IPs: map[172.19.154.133:{}]
	I0923 12:24:02.292936       1 main.go:322] Node ha-565300-m02 has CIDR [10.244.1.0/24] 
	I0923 12:24:02.293288       1 main.go:295] Handling node with IPs: map[172.19.153.80:{}]
	I0923 12:24:02.293380       1 main.go:322] Node ha-565300-m03 has CIDR [10.244.2.0/24] 
	I0923 12:24:12.288205       1 main.go:295] Handling node with IPs: map[172.19.153.80:{}]
	I0923 12:24:12.288253       1 main.go:322] Node ha-565300-m03 has CIDR [10.244.2.0/24] 
	I0923 12:24:12.288442       1 main.go:295] Handling node with IPs: map[172.19.146.194:{}]
	I0923 12:24:12.288456       1 main.go:299] handling current node
	I0923 12:24:12.288469       1 main.go:295] Handling node with IPs: map[172.19.154.133:{}]
	I0923 12:24:12.288474       1 main.go:322] Node ha-565300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [d6fe896ee937] <==
	I0923 12:14:17.525024       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 12:14:17.568176       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0923 12:14:17.590561       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 12:14:22.450804       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 12:14:23.045413       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0923 12:21:15.212477       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:21:15.213640       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0923 12:21:15.214375       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.801µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0923 12:21:15.215271       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:21:15.319953       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="39.782249ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-565300-m03.17f7dee8f5ebc58e" result=null
	E0923 12:23:15.675950       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56707: use of closed network connection
	E0923 12:23:17.312538       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56709: use of closed network connection
	E0923 12:23:17.800936       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56711: use of closed network connection
	E0923 12:23:18.360662       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56713: use of closed network connection
	E0923 12:23:18.988299       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56715: use of closed network connection
	E0923 12:23:19.453347       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56718: use of closed network connection
	E0923 12:23:19.951394       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56720: use of closed network connection
	E0923 12:23:20.447675       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56722: use of closed network connection
	E0923 12:23:20.912897       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56724: use of closed network connection
	E0923 12:23:21.777764       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56727: use of closed network connection
	E0923 12:23:32.248747       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56729: use of closed network connection
	E0923 12:23:32.707310       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56731: use of closed network connection
	E0923 12:23:43.195771       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56733: use of closed network connection
	E0923 12:23:43.651416       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56736: use of closed network connection
	E0923 12:23:54.123282       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56738: use of closed network connection
	
	
	==> kube-controller-manager [bb14fd3d1b74] <==
	I0923 12:21:43.099854       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m03"
	I0923 12:21:43.583983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m03"
	I0923 12:21:45.304119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m03"
	I0923 12:22:25.491626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="608.695382ms"
	I0923 12:22:25.787334       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="295.323009ms"
	I0923 12:22:28.679829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="2.892442396s"
	I0923 12:22:32.122027       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="3.442126585s"
	E0923 12:22:32.123217       1 replica_set.go:560] "Unhandled Error" err="sync \"default/busybox-7dff88458\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7dff88458\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0923 12:22:32.212779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="88.126094ms"
	I0923 12:22:32.299323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="86.405693ms"
	I0923 12:22:32.299443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.503µs"
	I0923 12:22:34.905279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.404µs"
	I0923 12:22:37.093815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="120.793972ms"
	I0923 12:22:37.094193       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="105.307µs"
	I0923 12:22:37.625570       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.803µs"
	I0923 12:22:37.671257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.918283ms"
	I0923 12:22:37.671937       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.803µs"
	I0923 12:22:38.263655       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.734387ms"
	I0923 12:22:38.263883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="137.709µs"
	I0923 12:22:43.916612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m02"
	I0923 12:22:47.702584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m03"
	I0923 12:22:57.831457       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300"
	I0923 12:23:08.321926       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.904µs"
	I0923 12:23:08.979814       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.303µs"
	I0923 12:23:08.988753       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.903µs"
	
	
	==> kube-proxy [5a8e37d9bdb7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:14:24.478901       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:14:24.495540       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.146.194"]
	E0923 12:14:24.495616       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:14:24.556077       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:14:24.556120       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:14:24.556144       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:14:24.559499       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:14:24.559981       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:14:24.560112       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:14:24.561780       1 config.go:199] "Starting service config controller"
	I0923 12:14:24.561830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:14:24.562014       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:14:24.562028       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:14:24.565530       1 config.go:328] "Starting node config controller"
	I0923 12:14:24.565569       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:14:24.662272       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:14:24.662298       1 shared_informer.go:320] Caches are synced for service config
	I0923 12:14:24.665811       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c9ae68aa117] <==
	W0923 12:14:15.491170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 12:14:15.491259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:15.646997       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 12:14:15.647069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:15.703652       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 12:14:15.704401       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 12:14:15.708640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 12:14:15.709158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:15.770193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:14:15.770370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:15.821737       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:14:15.821776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:15.882072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:14:15.882184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 12:14:17.482559       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 12:22:25.442636       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 5f280359-8465-4f3e-9edb-aca9c8fdea2b(default/busybox-7dff88458-86bbx) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-86bbx"
	E0923 12:22:25.465512       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 5f280359-8465-4f3e-9edb-aca9c8fdea2b(default/busybox-7dff88458-86bbx) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-86bbx"
	I0923 12:22:25.465769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-86bbx" node="ha-565300-m03"
	E0923 12:22:25.815213       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 5d4542dc-bc77-4def-a133-8fac51f88c4e(default/busybox-7dff88458-45cpz) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-45cpz"
	E0923 12:22:25.815321       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 5d4542dc-bc77-4def-a133-8fac51f88c4e(default/busybox-7dff88458-45cpz) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-45cpz"
	I0923 12:22:25.815341       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-45cpz" node="ha-565300-m03"
	E0923 12:22:27.254449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qhcgz\": pod busybox-7dff88458-qhcgz is already assigned to node \"ha-565300\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qhcgz" node="ha-565300"
	E0923 12:22:27.297313       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2e3f06e7-cb04-4d02-9613-2b6d50f47a5e(default/busybox-7dff88458-qhcgz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-qhcgz"
	E0923 12:22:27.297359       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qhcgz\": pod busybox-7dff88458-qhcgz is already assigned to node \"ha-565300\"" pod="default/busybox-7dff88458-qhcgz"
	I0923 12:22:27.297398       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qhcgz" node="ha-565300"
	
	
	==> kubelet <==
	Sep 23 12:21:17 ha-565300 kubelet[2314]: E0923 12:21:17.616766    2314 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:21:17 ha-565300 kubelet[2314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:21:17 ha-565300 kubelet[2314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:21:17 ha-565300 kubelet[2314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:21:17 ha-565300 kubelet[2314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 12:22:17 ha-565300 kubelet[2314]: E0923 12:22:17.616882    2314 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:22:17 ha-565300 kubelet[2314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:22:17 ha-565300 kubelet[2314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:22:17 ha-565300 kubelet[2314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:22:17 ha-565300 kubelet[2314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 12:22:27 ha-565300 kubelet[2314]: I0923 12:22:27.208437    2314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=476.208416258 podStartE2EDuration="7m56.208416258s" podCreationTimestamp="2024-09-23 12:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-23 12:14:47.117832516 +0000 UTC m=+29.799556645" watchObservedRunningTime="2024-09-23 12:22:27.208416258 +0000 UTC m=+489.890140287"
	Sep 23 12:22:27 ha-565300 kubelet[2314]: I0923 12:22:27.440298    2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7wkq\" (UniqueName: \"kubernetes.io/projected/2e3f06e7-cb04-4d02-9613-2b6d50f47a5e-kube-api-access-s7wkq\") pod \"busybox-7dff88458-qhcgz\" (UID: \"2e3f06e7-cb04-4d02-9613-2b6d50f47a5e\") " pod="default/busybox-7dff88458-qhcgz"
	Sep 23 12:22:27 ha-565300 kubelet[2314]: E0923 12:22:27.615449    2314 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s7wkq], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-7dff88458-qhcgz" podUID="2e3f06e7-cb04-4d02-9613-2b6d50f47a5e"
	Sep 23 12:22:27 ha-565300 kubelet[2314]: I0923 12:22:27.945372    2314 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvm8g\" (UniqueName: \"kubernetes.io/projected/ca25ce71-4101-4f67-9c33-37f1fbcc0060-kube-api-access-bvm8g\") pod \"busybox-7dff88458-rjg7r\" (UID: \"ca25ce71-4101-4f67-9c33-37f1fbcc0060\") " pod="default/busybox-7dff88458-rjg7r"
	Sep 23 12:22:28 ha-565300 kubelet[2314]: E0923 12:22:28.686652    2314 projected.go:194] Error preparing data for projected volume kube-api-access-s7wkq for pod default/busybox-7dff88458-qhcgz: failed to fetch token: pods "busybox-7dff88458-qhcgz" not found
	Sep 23 12:22:28 ha-565300 kubelet[2314]: E0923 12:22:28.772168    2314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e3f06e7-cb04-4d02-9613-2b6d50f47a5e-kube-api-access-s7wkq podName:2e3f06e7-cb04-4d02-9613-2b6d50f47a5e nodeName:}" failed. No retries permitted until 2024-09-23 12:22:29.218292857 +0000 UTC m=+491.900016886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s7wkq" (UniqueName: "kubernetes.io/projected/2e3f06e7-cb04-4d02-9613-2b6d50f47a5e-kube-api-access-s7wkq") pod "busybox-7dff88458-qhcgz" (UID: "2e3f06e7-cb04-4d02-9613-2b6d50f47a5e") : failed to fetch token: pods "busybox-7dff88458-qhcgz" not found
	Sep 23 12:22:28 ha-565300 kubelet[2314]: I0923 12:22:28.781075    2314 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-s7wkq\" (UniqueName: \"kubernetes.io/projected/2e3f06e7-cb04-4d02-9613-2b6d50f47a5e-kube-api-access-s7wkq\") on node \"ha-565300\" DevicePath \"\""
	Sep 23 12:22:29 ha-565300 kubelet[2314]: I0923 12:22:29.562957    2314 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e3f06e7-cb04-4d02-9613-2b6d50f47a5e" path="/var/lib/kubelet/pods/2e3f06e7-cb04-4d02-9613-2b6d50f47a5e/volumes"
	Sep 23 12:22:32 ha-565300 kubelet[2314]: I0923 12:22:32.517177    2314 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94d7ba7dd4e11e602b396a5754f5a9c0a4d8b23595aafe2181de568836040596"
	Sep 23 12:23:17 ha-565300 kubelet[2314]: E0923 12:23:17.621732    2314 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:23:17 ha-565300 kubelet[2314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:23:17 ha-565300 kubelet[2314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:23:17 ha-565300 kubelet[2314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:23:17 ha-565300 kubelet[2314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 12:23:20 ha-565300 kubelet[2314]: E0923 12:23:20.449992    2314 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59212->127.0.0.1:35607: write tcp 127.0.0.1:59212->127.0.0.1:35607: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-565300 -n ha-565300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-565300 -n ha-565300: (11.0406329s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (64.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (163.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 node start m02 -v=7 --alsologtostderr
E0923 12:40:29.833642    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 node start m02 -v=7 --alsologtostderr: exit status 1 (1m29.4630809s)

                                                
                                                
-- stdout --
	* Starting "ha-565300-m02" control-plane node in "ha-565300" cluster
	* Restarting existing hyperv VM for "ha-565300-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 12:40:03.647047    4032 out.go:345] Setting OutFile to fd 1824 ...
	I0923 12:40:03.713581    4032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:40:03.713651    4032 out.go:358] Setting ErrFile to fd 1832...
	I0923 12:40:03.713720    4032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:40:03.725434    4032 mustload.go:65] Loading cluster: ha-565300
	I0923 12:40:03.725858    4032 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:40:03.726532    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:05.641694    4032 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 12:40:05.641694    4032 main.go:141] libmachine: [stderr =====>] : 
	W0923 12:40:05.641694    4032 host.go:58] "ha-565300-m02" host status: Stopped
	I0923 12:40:05.644720    4032 out.go:177] * Starting "ha-565300-m02" control-plane node in "ha-565300" cluster
	I0923 12:40:05.647023    4032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:40:05.647075    4032 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 12:40:05.647075    4032 cache.go:56] Caching tarball of preloaded images
	I0923 12:40:05.647653    4032 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:40:05.647653    4032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:40:05.647653    4032 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:40:05.649824    4032 start.go:360] acquireMachinesLock for ha-565300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:40:05.649824    4032 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-565300-m02"
	I0923 12:40:05.650437    4032 start.go:96] Skipping create...Using existing machine configuration
	I0923 12:40:05.650437    4032 fix.go:54] fixHost starting: m02
	I0923 12:40:05.651025    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:07.569945    4032 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 12:40:07.569945    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:07.570170    4032 fix.go:112] recreateIfNeeded on ha-565300-m02: state=Stopped err=<nil>
	W0923 12:40:07.570170    4032 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 12:40:07.574067    4032 out.go:177] * Restarting existing hyperv VM for "ha-565300-m02" ...
	I0923 12:40:07.575786    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-565300-m02
	I0923 12:40:10.386241    4032 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:40:10.386300    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:10.386300    4032 main.go:141] libmachine: Waiting for host to start...
	I0923 12:40:10.386300    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:12.390308    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:12.390675    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:12.390754    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:14.626706    4032 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:40:14.627248    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:15.627598    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:17.565513    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:17.565513    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:17.565513    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:19.832140    4032 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:40:19.832140    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:20.833453    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:22.830153    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:22.830153    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:22.830153    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:25.064339    4032 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:40:25.064339    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:26.064791    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:28.070380    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:28.070380    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:28.070897    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:30.322859    4032 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:40:30.323866    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:31.324661    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:33.312663    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:33.312717    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:33.312717    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:35.644671    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:40:35.644671    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:35.646787    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:37.554520    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:37.554520    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:37.554975    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:39.823745    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:40:39.823888    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:39.823888    4032 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:40:39.825942    4032 machine.go:93] provisionDockerMachine start ...
	I0923 12:40:39.826679    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:41.719144    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:41.719144    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:41.719275    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:43.983142    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:40:43.983142    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:43.989119    4032 main.go:141] libmachine: Using SSH client type: native
	I0923 12:40:43.989328    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
	I0923 12:40:43.989328    4032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:40:44.124881    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:40:44.124881    4032 buildroot.go:166] provisioning hostname "ha-565300-m02"
	I0923 12:40:44.125016    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:45.975413    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:45.975413    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:45.975492    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:48.205341    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:40:48.205341    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:48.209380    4032 main.go:141] libmachine: Using SSH client type: native
	I0923 12:40:48.209852    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
	I0923 12:40:48.209911    4032 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565300-m02 && echo "ha-565300-m02" | sudo tee /etc/hostname
	I0923 12:40:48.364124    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565300-m02
	
	I0923 12:40:48.364124    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:50.257873    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:50.258563    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:50.258651    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:52.475591    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:40:52.475591    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:52.482600    4032 main.go:141] libmachine: Using SSH client type: native
	I0923 12:40:52.483125    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
	I0923 12:40:52.483125    4032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:40:52.617943    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:40:52.617943    4032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 12:40:52.617943    4032 buildroot.go:174] setting up certificates
	I0923 12:40:52.617943    4032 provision.go:84] configureAuth start
	I0923 12:40:52.617943    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:54.513730    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:54.514188    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:54.514188    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:40:56.750292    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:40:56.750292    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:56.750292    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:40:58.611971    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:40:58.611971    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:40:58.612389    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:41:00.846831    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:41:00.847144    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:00.847347    4032 provision.go:143] copyHostCerts
	I0923 12:41:00.847532    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 12:41:00.847851    4032 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 12:41:00.847851    4032 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 12:41:00.848133    4032 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 12:41:00.849276    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 12:41:00.849276    4032 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 12:41:00.849276    4032 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 12:41:00.849865    4032 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 12:41:00.850735    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 12:41:00.850954    4032 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 12:41:00.850954    4032 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 12:41:00.851245    4032 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 12:41:00.852198    4032 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-565300-m02 san=[127.0.0.1 172.19.150.121 ha-565300-m02 localhost minikube]
	I0923 12:41:01.010750    4032 provision.go:177] copyRemoteCerts
	I0923 12:41:01.018749    4032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:41:01.018749    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:41:02.928809    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:41:02.928809    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:02.929472    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:41:05.176900    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:41:05.177448    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:05.177758    4032 sshutil.go:53] new ssh client: &{IP:172.19.150.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:41:05.286031    4032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2669947s)
	I0923 12:41:05.286135    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 12:41:05.286523    4032 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:41:05.339683    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 12:41:05.339683    4032 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:41:05.387760    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 12:41:05.388051    4032 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:41:05.431244    4032 provision.go:87] duration metric: took 12.8122076s to configureAuth
	I0923 12:41:05.431346    4032 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:41:05.432159    4032 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:41:05.432296    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:41:07.323033    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:41:07.323088    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:07.323172    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:41:09.578646    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:41:09.579554    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:09.583121    4032 main.go:141] libmachine: Using SSH client type: native
	I0923 12:41:09.583710    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
	I0923 12:41:09.583710    4032 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:41:09.715688    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:41:09.715688    4032 buildroot.go:70] root file system type: tmpfs
	I0923 12:41:09.715688    4032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:41:09.715688    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:41:11.587192    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:41:11.587877    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:11.587951    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:41:13.844259    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:41:13.844259    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:13.848684    4032 main.go:141] libmachine: Using SSH client type: native
	I0923 12:41:13.849313    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
	I0923 12:41:13.849313    4032 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:41:14.007926    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:41:14.007991    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:41:15.877837    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:41:15.878720    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:15.878906    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:41:18.122217    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:41:18.123174    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:18.127165    4032 main.go:141] libmachine: Using SSH client type: native
	I0923 12:41:18.127235    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
	I0923 12:41:18.127235    4032 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:41:20.567719    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 12:41:20.567719    4032 machine.go:96] duration metric: took 40.7390268s to provisionDockerMachine
	I0923 12:41:20.567719    4032 start.go:293] postStartSetup for "ha-565300-m02" (driver="hyperv")
	I0923 12:41:20.567719    4032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:41:20.576309    4032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:41:20.576309    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:41:22.456671    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:41:22.457719    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:22.457787    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:41:24.687817    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:41:24.688149    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:24.688149    4032 sshutil.go:53] new ssh client: &{IP:172.19.150.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:41:24.793228    4032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2166349s)
	I0923 12:41:24.802845    4032 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:41:24.809064    4032 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:41:24.809064    4032 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 12:41:24.809912    4032 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 12:41:24.811205    4032 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 12:41:24.811205    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 12:41:24.822721    4032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:41:24.840928    4032 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 12:41:24.887844    4032 start.go:296] duration metric: took 4.3198333s for postStartSetup
	I0923 12:41:24.896301    4032 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0923 12:41:24.896301    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:41:26.727445    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:41:26.727445    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:26.728257    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:41:28.964795    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
	
	I0923 12:41:28.964795    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:28.965509    4032 sshutil.go:53] new ssh client: &{IP:172.19.150.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:41:29.068457    4032 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.1718742s)
	I0923 12:41:29.068633    4032 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I0923 12:41:29.082007    4032 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0923 12:41:29.133943    4032 fix.go:56] duration metric: took 1m23.4777996s for fixHost
	I0923 12:41:29.134004    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:41:31.018434    4032 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:41:31.018434    4032 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:41:31.018434    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]

** /stderr **
ha_test.go:422: I0923 12:40:03.647047    4032 out.go:345] Setting OutFile to fd 1824 ...
I0923 12:40:03.713581    4032 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:40:03.713651    4032 out.go:358] Setting ErrFile to fd 1832...
I0923 12:40:03.713720    4032 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:40:03.725434    4032 mustload.go:65] Loading cluster: ha-565300
I0923 12:40:03.725858    4032 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:40:03.726532    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:05.641694    4032 main.go:141] libmachine: [stdout =====>] : Off

I0923 12:40:05.641694    4032 main.go:141] libmachine: [stderr =====>] : 
W0923 12:40:05.641694    4032 host.go:58] "ha-565300-m02" host status: Stopped
I0923 12:40:05.644720    4032 out.go:177] * Starting "ha-565300-m02" control-plane node in "ha-565300" cluster
I0923 12:40:05.647023    4032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 12:40:05.647075    4032 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
I0923 12:40:05.647075    4032 cache.go:56] Caching tarball of preloaded images
I0923 12:40:05.647653    4032 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0923 12:40:05.647653    4032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0923 12:40:05.647653    4032 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
I0923 12:40:05.649824    4032 start.go:360] acquireMachinesLock for ha-565300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0923 12:40:05.649824    4032 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-565300-m02"
I0923 12:40:05.650437    4032 start.go:96] Skipping create...Using existing machine configuration
I0923 12:40:05.650437    4032 fix.go:54] fixHost starting: m02
I0923 12:40:05.651025    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:07.569945    4032 main.go:141] libmachine: [stdout =====>] : Off

I0923 12:40:07.569945    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:07.570170    4032 fix.go:112] recreateIfNeeded on ha-565300-m02: state=Stopped err=<nil>
W0923 12:40:07.570170    4032 fix.go:138] unexpected machine state, will restart: <nil>
I0923 12:40:07.574067    4032 out.go:177] * Restarting existing hyperv VM for "ha-565300-m02" ...
I0923 12:40:07.575786    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-565300-m02
I0923 12:40:10.386241    4032 main.go:141] libmachine: [stdout =====>] : 
I0923 12:40:10.386300    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:10.386300    4032 main.go:141] libmachine: Waiting for host to start...
I0923 12:40:10.386300    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:12.390308    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:12.390675    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:12.390754    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:14.626706    4032 main.go:141] libmachine: [stdout =====>] : 
I0923 12:40:14.627248    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:15.627598    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:17.565513    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:17.565513    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:17.565513    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:19.832140    4032 main.go:141] libmachine: [stdout =====>] : 
I0923 12:40:19.832140    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:20.833453    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:22.830153    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:22.830153    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:22.830153    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:25.064339    4032 main.go:141] libmachine: [stdout =====>] : 
I0923 12:40:25.064339    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:26.064791    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:28.070380    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:28.070380    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:28.070897    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:30.322859    4032 main.go:141] libmachine: [stdout =====>] : 
I0923 12:40:30.323866    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:31.324661    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:33.312663    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:33.312717    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:33.312717    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:35.644671    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:40:35.644671    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:35.646787    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:37.554520    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:37.554520    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:37.554975    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:39.823745    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:40:39.823888    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:39.823888    4032 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
I0923 12:40:39.825942    4032 machine.go:93] provisionDockerMachine start ...
I0923 12:40:39.826679    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:41.719144    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:41.719144    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:41.719275    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:43.983142    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:40:43.983142    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:43.989119    4032 main.go:141] libmachine: Using SSH client type: native
I0923 12:40:43.989328    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
I0923 12:40:43.989328    4032 main.go:141] libmachine: About to run SSH command:
hostname
I0923 12:40:44.124881    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0923 12:40:44.124881    4032 buildroot.go:166] provisioning hostname "ha-565300-m02"
I0923 12:40:44.125016    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:45.975413    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:45.975413    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:45.975492    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:48.205341    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:40:48.205341    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:48.209380    4032 main.go:141] libmachine: Using SSH client type: native
I0923 12:40:48.209852    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
I0923 12:40:48.209911    4032 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-565300-m02 && echo "ha-565300-m02" | sudo tee /etc/hostname
I0923 12:40:48.364124    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565300-m02
I0923 12:40:48.364124    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:50.257873    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:50.258563    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:50.258651    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:52.475591    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:40:52.475591    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:52.482600    4032 main.go:141] libmachine: Using SSH client type: native
I0923 12:40:52.483125    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
I0923 12:40:52.483125    4032 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-565300-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565300-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-565300-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0923 12:40:52.617943    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0923 12:40:52.617943    4032 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
I0923 12:40:52.617943    4032 buildroot.go:174] setting up certificates
I0923 12:40:52.617943    4032 provision.go:84] configureAuth start
I0923 12:40:52.617943    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:54.513730    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:54.514188    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:54.514188    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:40:56.750292    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:40:56.750292    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:56.750292    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:40:58.611971    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:40:58.611971    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:40:58.612389    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:41:00.846831    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:41:00.847144    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:00.847347    4032 provision.go:143] copyHostCerts
I0923 12:41:00.847532    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
I0923 12:41:00.847851    4032 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
I0923 12:41:00.847851    4032 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
I0923 12:41:00.848133    4032 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
I0923 12:41:00.849276    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
I0923 12:41:00.849276    4032 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
I0923 12:41:00.849276    4032 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
I0923 12:41:00.849865    4032 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
I0923 12:41:00.850735    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
I0923 12:41:00.850954    4032 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
I0923 12:41:00.850954    4032 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
I0923 12:41:00.851245    4032 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
I0923 12:41:00.852198    4032 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-565300-m02 san=[127.0.0.1 172.19.150.121 ha-565300-m02 localhost minikube]
I0923 12:41:01.010750    4032 provision.go:177] copyRemoteCerts
I0923 12:41:01.018749    4032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0923 12:41:01.018749    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:41:02.928809    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:41:02.928809    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:02.929472    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:41:05.176900    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:41:05.177448    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:05.177758    4032 sshutil.go:53] new ssh client: &{IP:172.19.150.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
I0923 12:41:05.286031    4032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2669947s)
I0923 12:41:05.286135    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0923 12:41:05.286523    4032 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0923 12:41:05.339683    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0923 12:41:05.339683    4032 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0923 12:41:05.387760    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0923 12:41:05.388051    4032 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0923 12:41:05.431244    4032 provision.go:87] duration metric: took 12.8122076s to configureAuth
I0923 12:41:05.431346    4032 buildroot.go:189] setting minikube options for container-runtime
I0923 12:41:05.432159    4032 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:41:05.432296    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:41:07.323033    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:41:07.323088    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:07.323172    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:41:09.578646    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:41:09.579554    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:09.583121    4032 main.go:141] libmachine: Using SSH client type: native
I0923 12:41:09.583710    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
I0923 12:41:09.583710    4032 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0923 12:41:09.715688    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0923 12:41:09.715688    4032 buildroot.go:70] root file system type: tmpfs
I0923 12:41:09.715688    4032 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0923 12:41:09.715688    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:41:11.587192    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:41:11.587877    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:11.587951    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:41:13.844259    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:41:13.844259    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:13.848684    4032 main.go:141] libmachine: Using SSH client type: native
I0923 12:41:13.849313    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
I0923 12:41:13.849313    4032 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0923 12:41:14.007926    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0923 12:41:14.007991    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:41:15.877837    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:41:15.878720    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:15.878906    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:41:18.122217    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:41:18.123174    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:18.127165    4032 main.go:141] libmachine: Using SSH client type: native
I0923 12:41:18.127235    4032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.150.121 22 <nil> <nil>}
I0923 12:41:18.127235    4032 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0923 12:41:20.567719    4032 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0923 12:41:20.567719    4032 machine.go:96] duration metric: took 40.7390268s to provisionDockerMachine
I0923 12:41:20.567719    4032 start.go:293] postStartSetup for "ha-565300-m02" (driver="hyperv")
I0923 12:41:20.567719    4032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0923 12:41:20.576309    4032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0923 12:41:20.576309    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:41:22.456671    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:41:22.457719    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:22.457787    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:41:24.687817    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:41:24.688149    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:24.688149    4032 sshutil.go:53] new ssh client: &{IP:172.19.150.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
I0923 12:41:24.793228    4032 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2166349s)
I0923 12:41:24.802845    4032 ssh_runner.go:195] Run: cat /etc/os-release
I0923 12:41:24.809064    4032 info.go:137] Remote host: Buildroot 2023.02.9
I0923 12:41:24.809064    4032 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
I0923 12:41:24.809912    4032 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
I0923 12:41:24.811205    4032 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
I0923 12:41:24.811205    4032 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
I0923 12:41:24.822721    4032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0923 12:41:24.840928    4032 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
I0923 12:41:24.887844    4032 start.go:296] duration metric: took 4.3198333s for postStartSetup
I0923 12:41:24.896301    4032 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0923 12:41:24.896301    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:41:26.727445    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:41:26.727445    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:26.728257    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
I0923 12:41:28.964795    4032 main.go:141] libmachine: [stdout =====>] : 172.19.150.121
I0923 12:41:28.964795    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:28.965509    4032 sshutil.go:53] new ssh client: &{IP:172.19.150.121 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
I0923 12:41:29.068457    4032 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.1718742s)
I0923 12:41:29.068633    4032 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
I0923 12:41:29.082007    4032 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
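The restore step above re-applies each entry found under /var/lib/minikube/backup by rsync'ing it back onto /. A small Go sketch of how such a command line could be assembled (the helper name is hypothetical, not the actual minikube code):

```go
package main

import (
	"fmt"
	"strings"
)

// restoreCmd mirrors the log line above: a backup entry such as "etc"
// is copied back to the filesystem root, preserving attributes
// (--archive) and only overwriting older files (--update).
func restoreCmd(entry string) string {
	return strings.Join([]string{
		"sudo", "rsync", "--archive", "--update",
		"/var/lib/minikube/backup/" + entry, "/",
	}, " ")
}

func main() {
	fmt.Println(restoreCmd("etc"))
}
```

`--update` matters here: files the running VM has already modified since the backup are left alone rather than clobbered.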
I0923 12:41:29.133943    4032 fix.go:56] duration metric: took 1m23.4777996s for fixHost
I0923 12:41:29.134004    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
I0923 12:41:31.018434    4032 main.go:141] libmachine: [stdout =====>] : Running
I0923 12:41:31.018434    4032 main.go:141] libmachine: [stderr =====>] : 
I0923 12:41:31.018434    4032 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-565300 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: context deadline exceeded (62µs)
I0923 12:41:33.120927    3844 retry.go:31] will retry after 977.790029ms: context deadline exceeded
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0923 12:41:34.099586    3844 retry.go:31] will retry after 1.115964533s: context deadline exceeded
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0923 12:41:35.216023    3844 retry.go:31] will retry after 3.090103634s: context deadline exceeded
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0923 12:41:38.307055    3844 retry.go:31] will retry after 3.208261169s: context deadline exceeded
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0923 12:41:41.515971    3844 retry.go:31] will retry after 3.858683654s: context deadline exceeded
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0923 12:41:45.376502    3844 retry.go:31] will retry after 10.528054244s: context deadline exceeded
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: context deadline exceeded (529.4µs)
I0923 12:41:55.906132    3844 retry.go:31] will retry after 8.631311636s: context deadline exceeded
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0923 12:42:04.538675    3844 retry.go:31] will retry after 12.426919132s: context deadline exceeded
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-565300 -n ha-565300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-565300 -n ha-565300: (10.4562613s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 logs -n 25: (7.5515821s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-565300 ssh -n                                                                                                          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:34 UTC | 23 Sep 24 12:34 UTC |
	|         | ha-565300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt                                                                       | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:34 UTC | 23 Sep 24 12:34 UTC |
	|         | ha-565300:/home/docker/cp-test_ha-565300-m03_ha-565300.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n                                                                                                          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:34 UTC | 23 Sep 24 12:35 UTC |
	|         | ha-565300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n ha-565300 sudo cat                                                                                       | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:35 UTC | 23 Sep 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-565300-m03_ha-565300.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt                                                                       | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:35 UTC | 23 Sep 24 12:35 UTC |
	|         | ha-565300-m02:/home/docker/cp-test_ha-565300-m03_ha-565300-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n                                                                                                          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:35 UTC | 23 Sep 24 12:35 UTC |
	|         | ha-565300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n ha-565300-m02 sudo cat                                                                                   | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:35 UTC | 23 Sep 24 12:35 UTC |
	|         | /home/docker/cp-test_ha-565300-m03_ha-565300-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt                                                                       | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:35 UTC | 23 Sep 24 12:35 UTC |
	|         | ha-565300-m04:/home/docker/cp-test_ha-565300-m03_ha-565300-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n                                                                                                          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:35 UTC | 23 Sep 24 12:36 UTC |
	|         | ha-565300-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n ha-565300-m04 sudo cat                                                                                   | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:36 UTC | 23 Sep 24 12:36 UTC |
	|         | /home/docker/cp-test_ha-565300-m03_ha-565300-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-565300 cp testdata\cp-test.txt                                                                                         | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:36 UTC | 23 Sep 24 12:36 UTC |
	|         | ha-565300-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n                                                                                                          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:36 UTC | 23 Sep 24 12:36 UTC |
	|         | ha-565300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt                                                                       | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:36 UTC | 23 Sep 24 12:36 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4037996978\001\cp-test_ha-565300-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n                                                                                                          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:36 UTC | 23 Sep 24 12:36 UTC |
	|         | ha-565300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt                                                                       | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:36 UTC | 23 Sep 24 12:37 UTC |
	|         | ha-565300:/home/docker/cp-test_ha-565300-m04_ha-565300.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n                                                                                                          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | ha-565300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n ha-565300 sudo cat                                                                                       | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | /home/docker/cp-test_ha-565300-m04_ha-565300.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt                                                                       | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | ha-565300-m02:/home/docker/cp-test_ha-565300-m04_ha-565300-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n                                                                                                          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | ha-565300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n ha-565300-m02 sudo cat                                                                                   | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | /home/docker/cp-test_ha-565300-m04_ha-565300-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt                                                                       | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:38 UTC |
	|         | ha-565300-m03:/home/docker/cp-test_ha-565300-m04_ha-565300-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n                                                                                                          | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	|         | ha-565300-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-565300 ssh -n ha-565300-m03 sudo cat                                                                                   | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	|         | /home/docker/cp-test_ha-565300-m04_ha-565300-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-565300 node stop m02 -v=7                                                                                              | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	| node    | ha-565300 node start m02 -v=7                                                                                             | ha-565300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 12:40 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:11:33
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:11:32.978079    3340 out.go:345] Setting OutFile to fd 1532 ...
	I0923 12:11:33.023194    3340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:11:33.023194    3340 out.go:358] Setting ErrFile to fd 1356...
	I0923 12:11:33.023194    3340 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:11:33.040255    3340 out.go:352] Setting JSON to false
	I0923 12:11:33.042224    3340 start.go:129] hostinfo: {"hostname":"minikube5","uptime":489469,"bootTime":1726604023,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 12:11:33.042224    3340 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 12:11:33.047289    3340 out.go:177] * [ha-565300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 12:11:33.050785    3340 notify.go:220] Checking for updates...
	I0923 12:11:33.050785    3340 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:11:33.053483    3340 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:11:33.056631    3340 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 12:11:33.058975    3340 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:11:33.061367    3340 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:11:33.064125    3340 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:11:37.823204    3340 out.go:177] * Using the hyperv driver based on user configuration
	I0923 12:11:37.827034    3340 start.go:297] selected driver: hyperv
	I0923 12:11:37.827034    3340 start.go:901] validating driver "hyperv" against <nil>
	I0923 12:11:37.827034    3340 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:11:37.868172    3340 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 12:11:37.869018    3340 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:11:37.869018    3340 cni.go:84] Creating CNI manager for ""
	I0923 12:11:37.869018    3340 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 12:11:37.869018    3340 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 12:11:37.870017    3340 start.go:340] cluster config:
	{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:11:37.870017    3340 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:11:37.874629    3340 out.go:177] * Starting "ha-565300" primary control-plane node in "ha-565300" cluster
	I0923 12:11:37.876732    3340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:11:37.877730    3340 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 12:11:37.877730    3340 cache.go:56] Caching tarball of preloaded images
	I0923 12:11:37.878097    3340 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:11:37.878097    3340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:11:37.878742    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:11:37.879180    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json: {Name:mkc75814a813493ad95a286b802d19c495eecb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:11:37.880387    3340 start.go:360] acquireMachinesLock for ha-565300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:11:37.880387    3340 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-565300"
	I0923 12:11:37.880614    3340 start.go:93] Provisioning new machine with config: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:11:37.880819    3340 start.go:125] createHost starting for "" (driver="hyperv")
	I0923 12:11:37.883223    3340 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:11:37.883801    3340 start.go:159] libmachine.API.Create for "ha-565300" (driver="hyperv")
	I0923 12:11:37.883801    3340 client.go:168] LocalClient.Create starting
	I0923 12:11:37.883801    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:11:37.884384    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:11:37.884980    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0923 12:11:39.701346    3340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0923 12:11:39.701346    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:39.701627    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0923 12:11:41.206711    3340 main.go:141] libmachine: [stdout =====>] : False
	
	I0923 12:11:41.206711    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:41.206844    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:11:42.563585    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:11:42.563585    3340 main.go:141] libmachine: [stderr =====>] : 
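The three probes above check, in order, that the Hyper-V PowerShell module is available, whether the user is in the "Hyper-V Administrators" group (well-known SID S-1-5-32-578), and finally whether the user is a built-in Administrator; membership in either role is enough to proceed. A minimal Go sketch of how such `IsInRole` stdout could be interpreted (the function name `isElevated` is illustrative, not minikube's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// hypervAdminSID is the well-known SID of the "Hyper-V Administrators"
// group that the second PowerShell probe in the log tests for.
const hypervAdminSID = "S-1-5-32-578"

// isElevated interprets the stdout of the two IsInRole probes seen in the
// log ("True"/"False" plus trailing whitespace): membership in either the
// Hyper-V Administrators group or the built-in Administrator role suffices.
func isElevated(hypervAdmins, builtinAdmin string) bool {
	t := func(s string) bool { return strings.TrimSpace(s) == "True" }
	return t(hypervAdmins) || t(builtinAdmin)
}

func main() {
	// Matches the log: the Hyper-V Administrators check returned False,
	// the built-in Administrator check returned True, so creation proceeds.
	fmt.Println(isElevated("False\n", "True\n")) // true
}
```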
	I0923 12:11:42.564162    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:11:45.601239    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:11:45.601239    3340 main.go:141] libmachine: [stderr =====>] : 
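The switch-selection query above asks PowerShell for a JSON array of switches that are either External or the well-known "Default Switch" GUID, sorted so External switches come first, and the driver takes the first entry. A sketch of that parsing step in Go, under the assumption that the driver simply unmarshals the captured stdout (`pickSwitch` and `vmSwitch` are illustrative names):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields projected by the PowerShell query in the log:
// Select Id, Name, SwitchType (VMSwitchType: 0=Private, 1=Internal, 2=External).
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// defaultSwitchID is the fixed GUID of the Hyper-V "Default Switch" that the
// Where-Object filter in the log matches on.
const defaultSwitchID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

// pickSwitch takes the first switch from the JSON array; the query has
// already sorted the candidates so an External switch would win over the
// Internal Default Switch.
func pickSwitch(raw []byte) (string, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return "", err
	}
	if len(switches) == 0 {
		return "", fmt.Errorf("no usable Hyper-V switch found")
	}
	return switches[0].Name, nil
}

func main() {
	// The exact stdout captured in the log above.
	raw := []byte(`[{"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444", "Name": "Default Switch", "SwitchType": 1}]`)
	name, err := pickSwitch(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // Default Switch
}
```

This matches the log's subsequent `Using switch "Default Switch"` line: only the Internal Default Switch was present, so it is selected.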
	I0923 12:11:45.604124    3340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:11:45.987938    3340 main.go:141] libmachine: Creating SSH key...
	I0923 12:11:46.263141    3340 main.go:141] libmachine: Creating VM...
	I0923 12:11:46.263141    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:11:48.690486    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:11:48.690486    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:48.691214    3340 main.go:141] libmachine: Using switch "Default Switch"
	I0923 12:11:48.691281    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:11:50.199853    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:11:50.199853    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:50.199853    3340 main.go:141] libmachine: Creating VHD
	I0923 12:11:50.200263    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0923 12:11:53.525044    3340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CCCE98BF-FC9E-4970-B4A7-8EDBBFA23647
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0923 12:11:53.525456    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:53.525456    3340 main.go:141] libmachine: Writing magic tar header
	I0923 12:11:53.525456    3340 main.go:141] libmachine: Writing SSH key tar header
	I0923 12:11:53.534386    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0923 12:11:56.390814    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:11:56.390814    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:11:56.391947    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\disk.vhd' -SizeBytes 20000MB
	I0923 12:11:58.708762    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:11:58.709454    3340 main.go:141] libmachine: [stderr =====>] : 
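The disk is built in three steps: a tiny 10MB fixed VHD is created first (so the "magic tar header" and SSH key tar header can be written directly into the raw image), then converted to a dynamic disk with the source deleted, and finally resized to the requested 20000MB. A string-level sketch of those three cmdlet invocations, with `vhdCommands`, `machineDir`, and `sizeMB` as hypothetical names for illustration:

```go
package main

import "fmt"

// vhdCommands reproduces, as plain strings, the three Hyper-V cmdlet
// invocations visible in the log: create a small fixed VHD, convert it to a
// dynamic disk (deleting the fixed source), then grow it to full size. The
// cmdlet names and flags match the log lines above.
func vhdCommands(machineDir string, sizeMB int) []string {
	fixed := machineDir + `\fixed.vhd`
	disk := machineDir + `\disk.vhd`
	return []string{
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, sizeMB),
	}
}

func main() {
	for _, c := range vhdCommands(`C:\m\ha`, 20000) {
		fmt.Println(c)
	}
}
```

The fixed intermediate is what makes the tar-header trick possible: a fixed VHD's payload starts at offset 0, so boot2docker can find the embedded SSH key on first boot before the disk is ever partitioned.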
	I0923 12:11:58.709534    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-565300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0923 12:12:02.027536    3340 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-565300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0923 12:12:02.027536    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:02.027636    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-565300 -DynamicMemoryEnabled $false
	I0923 12:12:03.974860    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:03.974860    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:03.974992    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-565300 -Count 2
	I0923 12:12:05.846962    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:05.847410    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:05.847485    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-565300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\boot2docker.iso'
	I0923 12:12:08.127192    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:08.127192    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:08.127778    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-565300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\disk.vhd'
	I0923 12:12:10.430666    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:10.431391    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:10.431391    3340 main.go:141] libmachine: Starting VM...
	I0923 12:12:10.431391    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-565300
	I0923 12:12:13.161930    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:13.161930    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:13.161930    3340 main.go:141] libmachine: Waiting for host to start...
	I0923 12:12:13.162911    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:15.206016    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:15.206016    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:15.206016    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:17.427988    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:17.427988    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:18.428717    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:20.362345    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:20.362345    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:20.362345    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:22.566265    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:22.566265    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:23.567445    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:25.471861    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:25.471895    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:25.472091    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:27.630283    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:27.630469    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:28.631003    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:30.586422    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:30.586422    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:30.586541    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:32.814531    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:12:32.814564    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:33.815259    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:35.722593    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:35.723495    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:35.723495    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:38.134812    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:38.134812    3340 main.go:141] libmachine: [stderr =====>] : 
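The "Waiting for host to start" phase above is a plain poll: each iteration queries the VM state and then the first IP of the first network adapter; an empty stdout means DHCP has not assigned an address yet, so the driver sleeps about one second and retries, until `172.19.146.194` finally appears. A minimal sketch of that retry loop, with `waitForIP` as an illustrative name and the PowerShell call abstracted into a callback so the loop can run without Hyper-V:

```go
package main

import (
	"fmt"
	"time"
)

// waitForIP models the retry loop in the log: each iteration asks the
// hypervisor for the VM's first IP address and sleeps one second when the
// answer is still empty. query stands in for the PowerShell probe
// ((Get-VM <name>).networkadapters[0]).ipaddresses[0].
func waitForIP(query func() string, attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := query(); ip != "" {
			return ip, nil
		}
		time.Sleep(1 * time.Second)
	}
	return "", fmt.Errorf("machine never reported an IP address")
}

func main() {
	// Simulate DHCP taking three polls to hand out a lease, as in the log
	// where several empty stdouts precede 172.19.146.194.
	polls := 0
	ip, _ := waitForIP(func() string {
		polls++
		if polls < 3 {
			return ""
		}
		return "172.19.146.194"
	}, 10)
	fmt.Println(ip) // 172.19.146.194
}
```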
	I0923 12:12:38.134812    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:39.998267    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:39.998267    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:39.998267    3340 machine.go:93] provisionDockerMachine start ...
	I0923 12:12:39.998267    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:41.857405    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:41.857405    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:41.857405    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:44.056159    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:44.056355    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:44.060816    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:12:44.071225    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:12:44.072228    3340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:12:44.204546    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:12:44.204716    3340 buildroot.go:166] provisioning hostname "ha-565300"
	I0923 12:12:44.204716    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:46.026771    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:46.027420    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:46.027420    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:48.183257    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:48.183257    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:48.187828    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:12:48.188088    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:12:48.188088    3340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565300 && echo "ha-565300" | sudo tee /etc/hostname
	I0923 12:12:48.333214    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565300
	
	I0923 12:12:48.333214    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:50.153533    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:50.153533    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:50.154332    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:52.281467    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:52.281663    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:52.284766    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:12:52.285377    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:12:52.285377    3340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565300/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:12:52.421094    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
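The shell snippet above is idempotent hosts-file provisioning: if no line already ends in the hostname, it either rewrites an existing `127.0.1.1` entry in place or appends a fresh one. The same logic as a pure-string Go sketch (`ensureHostsEntry` is an illustrative name; the real work happens over SSH with sed/tee as shown in the log):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell snippet in the log: if no line already
// maps the hostname, either rewrite an existing 127.0.1.1 line or append a
// new one. Running it twice with the same name is a no-op, which is why the
// provisioner can safely re-run on restarts.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already present on some line
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-565300"))
}
```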
	I0923 12:12:52.421094    3340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 12:12:52.421094    3340 buildroot.go:174] setting up certificates
	I0923 12:12:52.421094    3340 provision.go:84] configureAuth start
	I0923 12:12:52.421094    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:54.288167    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:54.289169    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:54.289339    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:12:56.464735    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:12:56.464932    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:56.464932    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:12:58.272016    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:12:58.272357    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:12:58.272357    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:00.489531    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:00.490357    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:00.490357    3340 provision.go:143] copyHostCerts
	I0923 12:13:00.490504    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 12:13:00.490742    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 12:13:00.490815    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 12:13:00.490951    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 12:13:00.492216    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 12:13:00.492388    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 12:13:00.492469    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 12:13:00.492735    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 12:13:00.493436    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 12:13:00.493623    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 12:13:00.493705    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 12:13:00.493873    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 12:13:00.494709    3340 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-565300 san=[127.0.0.1 172.19.146.194 ha-565300 localhost minikube]
	I0923 12:13:00.640683    3340 provision.go:177] copyRemoteCerts
	I0923 12:13:00.648701    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:13:00.648701    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:02.519203    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:02.519203    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:02.519304    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:04.702811    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:04.702811    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:04.704396    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:13:04.808977    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1599958s)
	I0923 12:13:04.809182    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 12:13:04.809924    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:13:04.851429    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 12:13:04.851996    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0923 12:13:04.894522    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 12:13:04.894963    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:13:04.934660    3340 provision.go:87] duration metric: took 12.5126428s to configureAuth
	I0923 12:13:04.934718    3340 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:13:04.935568    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:13:04.935639    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:06.758111    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:06.758111    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:06.758111    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:08.929281    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:08.929813    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:08.933512    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:08.933512    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:08.933512    3340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:13:09.064168    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:13:09.064168    3340 buildroot.go:70] root file system type: tmpfs
	I0923 12:13:09.064168    3340 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:13:09.064168    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:10.884061    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:10.884061    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:10.884061    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:13.065579    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:13.065579    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:13.069481    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:13.069865    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:13.069935    3340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:13:13.208408    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:13:13.208928    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:15.042932    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:15.042932    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:15.043293    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:17.257188    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:17.257188    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:17.261145    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:17.261483    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:17.261574    3340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:13:19.349851    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 12:13:19.350460    3340 machine.go:96] duration metric: took 39.3494836s to provisionDockerMachine
	I0923 12:13:19.350460    3340 client.go:171] duration metric: took 1m41.4598151s to LocalClient.Create
	I0923 12:13:19.350460    3340 start.go:167] duration metric: took 1m41.4598151s to libmachine.API.Create "ha-565300"
	I0923 12:13:19.350460    3340 start.go:293] postStartSetup for "ha-565300" (driver="hyperv")
	I0923 12:13:19.350460    3340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:13:19.358805    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:13:19.358805    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:21.205616    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:21.206341    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:21.206341    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:23.395568    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:23.395568    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:23.396574    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:13:23.498654    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1395702s)
	I0923 12:13:23.506687    3340 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:13:23.514435    3340 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:13:23.514435    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 12:13:23.515075    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 12:13:23.515887    3340 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 12:13:23.515887    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 12:13:23.526331    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:13:23.542677    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 12:13:23.585065    3340 start.go:296] duration metric: took 4.2343196s for postStartSetup
	I0923 12:13:23.588975    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:25.439069    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:25.439069    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:25.439403    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:27.644756    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:27.644756    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:27.645714    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:13:27.647742    3340 start.go:128] duration metric: took 1m49.7595184s to createHost
	I0923 12:13:27.647742    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:29.482844    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:29.482844    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:29.483258    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:31.661842    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:31.662840    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:31.666349    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:31.666876    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:31.666876    3340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:13:31.782986    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727093611.991103345
	
	I0923 12:13:31.783098    3340 fix.go:216] guest clock: 1727093611.991103345
	I0923 12:13:31.783098    3340 fix.go:229] Guest: 2024-09-23 12:13:31.991103345 +0000 UTC Remote: 2024-09-23 12:13:27.6477425 +0000 UTC m=+114.732820001 (delta=4.343360845s)
	I0923 12:13:31.783244    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:33.636121    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:33.636121    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:33.636517    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:35.803805    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:35.803805    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:35.809876    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:13:35.810324    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.146.194 22 <nil> <nil>}
	I0923 12:13:35.810324    3340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727093611
	I0923 12:13:35.947901    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 12:13:31 UTC 2024
	
	I0923 12:13:35.947901    3340 fix.go:236] clock set: Mon Sep 23 12:13:31 UTC 2024
	 (err=<nil>)
	I0923 12:13:35.947901    3340 start.go:83] releasing machines lock for "ha-565300", held for 1m58.0595495s
	I0923 12:13:35.948449    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:37.799890    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:37.799890    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:37.799890    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:39.965345    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:39.965345    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:39.969481    3340 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 12:13:39.969548    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:39.975699    3340 ssh_runner.go:195] Run: cat /version.json
	I0923 12:13:39.976250    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:13:41.873769    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:41.873769    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:41.873959    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:41.877437    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:13:41.877437    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:41.877547    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:13:44.154760    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:44.154835    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:44.154985    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:13:44.177520    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:13:44.177685    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:13:44.177969    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:13:44.254371    3340 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.2845337s)
	W0923 12:13:44.254524    3340 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 12:13:44.269720    3340 ssh_runner.go:235] Completed: cat /version.json: (4.2931809s)
	I0923 12:13:44.279758    3340 ssh_runner.go:195] Run: systemctl --version
	I0923 12:13:44.295780    3340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 12:13:44.303293    3340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:13:44.311767    3340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:13:44.338154    3340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:13:44.338269    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:13:44.338442    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0923 12:13:44.352524    3340 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 12:13:44.352524    3340 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 12:13:44.383799    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:13:44.416135    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:13:44.439326    3340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:13:44.452051    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:13:44.483142    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:13:44.508284    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:13:44.536304    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:13:44.562583    3340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:13:44.588400    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:13:44.614675    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:13:44.643861    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 12:13:44.670729    3340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:13:44.688132    3340 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:13:44.696881    3340 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:13:44.728075    3340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:13:44.750972    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:13:44.910487    3340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:13:44.936784    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:13:44.948451    3340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:13:44.979227    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:13:45.008231    3340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:13:45.046366    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:13:45.078463    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:13:45.110043    3340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:13:45.172398    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:13:45.194360    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:13:45.234926    3340 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:13:45.253370    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:13:45.268961    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:13:45.304239    3340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:13:45.467376    3340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:13:45.637903    3340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:13:45.638248    3340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:13:45.679959    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:13:45.867148    3340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:13:48.398044    3340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5305671s)
	I0923 12:13:48.409146    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:13:48.440062    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:13:48.469276    3340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:13:48.649341    3340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:13:48.842024    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:13:49.026941    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:13:49.065458    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:13:49.094000    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:13:49.265557    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:13:49.368777    3340 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:13:49.379784    3340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:13:49.388737    3340 start.go:563] Will wait 60s for crictl version
	I0923 12:13:49.398183    3340 ssh_runner.go:195] Run: which crictl
	I0923 12:13:49.412352    3340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:13:49.458715    3340 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:13:49.470375    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:13:49.506083    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:13:49.537704    3340 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:13:49.537896    3340 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 12:13:49.541799    3340 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 12:13:49.541799    3340 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 12:13:49.541799    3340 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 12:13:49.541799    3340 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 12:13:49.544485    3340 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 12:13:49.544485    3340 ip.go:214] interface addr: 172.19.144.1/20
	I0923 12:13:49.552109    3340 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 12:13:49.558776    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:13:49.588336    3340 kubeadm.go:883] updating cluster {Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:13:49.588336    3340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:13:49.594106    3340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:13:49.616419    3340 docker.go:685] Got preloaded images: 
	I0923 12:13:49.616419    3340 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0923 12:13:49.624338    3340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 12:13:49.652236    3340 ssh_runner.go:195] Run: which lz4
	I0923 12:13:49.656985    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 12:13:49.664592    3340 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 12:13:49.670654    3340 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 12:13:49.671659    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0923 12:13:51.321946    3340 docker.go:649] duration metric: took 1.6648483s to copy over tarball
	I0923 12:13:51.329495    3340 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 12:13:59.788772    3340 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4587063s)
	I0923 12:13:59.788918    3340 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 12:13:59.853820    3340 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 12:13:59.870819    3340 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0923 12:13:59.910818    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:14:00.085897    3340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:14:03.345536    3340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2592847s)
	I0923 12:14:03.356190    3340 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:14:03.379957    3340 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 12:14:03.380036    3340 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:14:03.380036    3340 kubeadm.go:934] updating node { 172.19.146.194 8443 v1.31.1 docker true true} ...
	I0923 12:14:03.380151    3340 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.146.194
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:14:03.387851    3340 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 12:14:03.444198    3340 cni.go:84] Creating CNI manager for ""
	I0923 12:14:03.444198    3340 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:14:03.444198    3340 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:14:03.444198    3340 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.146.194 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565300 NodeName:ha-565300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.146.194"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.146.194 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:14:03.444198    3340 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.146.194
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-565300"
	  kubeletExtraArgs:
	    node-ip: 172.19.146.194
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.146.194"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:14:03.444198    3340 kube-vip.go:115] generating kube-vip config ...
	I0923 12:14:03.453177    3340 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:14:03.474900    3340 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:14:03.475151    3340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0923 12:14:03.484633    3340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:14:03.504138    3340 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:14:03.511511    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0923 12:14:03.526318    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0923 12:14:03.551491    3340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:14:03.575933    3340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0923 12:14:03.600955    3340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0923 12:14:03.641331    3340 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:14:03.647196    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
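The `/etc/hosts` refresh above uses a remove-then-append pattern: strip any stale `control-plane.minikube.internal` entry, append the current VIP mapping, and copy the rewritten file back in one step. A minimal sketch of the same pattern against a scratch file (the paths, the stale `10.0.0.1` entry, and the VIP are illustrative, not taken from the log):

```shell
# Scratch hosts file standing in for /etc/hosts (illustrative contents).
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop any stale line ending in the control-plane hostname, then append the
# current VIP mapping; building the result in a temp file and copying it back
# avoids truncating the live file mid-write.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'172.19.159.254\tcontrol-plane.minikube.internal'; } > "$hosts.new"
cp "$hosts.new" "$hosts"

cat "$hosts"
```

The trailing `$` anchor in the `grep -v` pattern keeps unrelated lines (and hostnames that merely share the suffix) untouched.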
	I0923 12:14:03.674201    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:14:03.836279    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:14:03.862792    3340 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300 for IP: 172.19.146.194
	I0923 12:14:03.862889    3340 certs.go:194] generating shared ca certs ...
	I0923 12:14:03.862889    3340 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:03.863803    3340 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 12:14:03.864593    3340 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 12:14:03.864593    3340 certs.go:256] generating profile certs ...
	I0923 12:14:03.865564    3340 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key
	I0923 12:14:03.865632    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.crt with IP's: []
	I0923 12:14:04.034779    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.crt ...
	I0923 12:14:04.034779    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.crt: {Name:mk0eabf58bc28b7e88916d61fb2acdce8c8c3d40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.036783    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key ...
	I0923 12:14:04.036783    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key: {Name:mkb5e6f177eab2a657ef89ec7acff0020110aa26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.037791    3340 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.e4c40462
	I0923 12:14:04.037791    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.e4c40462 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.146.194 172.19.159.254]
	I0923 12:14:04.247095    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.e4c40462 ...
	I0923 12:14:04.247095    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.e4c40462: {Name:mk721a003060e4989528317e20d96954efec0127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.249171    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.e4c40462 ...
	I0923 12:14:04.249171    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.e4c40462: {Name:mkdccd811da24fa2e143615d68ba9562a3f3cdb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.250529    3340 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.e4c40462 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt
	I0923 12:14:04.263847    3340 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.e4c40462 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key
	I0923 12:14:04.266370    3340 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key
	I0923 12:14:04.266370    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt with IP's: []
	I0923 12:14:04.415661    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt ...
	I0923 12:14:04.415661    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt: {Name:mk7ac3327e52fa143763dcdc0dbe2ce5fae95d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.417193    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key ...
	I0923 12:14:04.417193    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key: {Name:mk6acac745e732e2160ab3ac3ed54a7d89e8268a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:04.417444    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:14:04.418424    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:14:04.418736    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:14:04.419011    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:14:04.419251    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:14:04.419495    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:14:04.419581    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:14:04.427624    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:14:04.428632    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 12:14:04.429022    3340 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 12:14:04.429022    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 12:14:04.429220    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 12:14:04.429473    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 12:14:04.429670    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 12:14:04.429670    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 12:14:04.429670    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:04.429670    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 12:14:04.429670    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 12:14:04.431901    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:14:04.473185    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 12:14:04.511961    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:14:04.551336    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:14:04.589323    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 12:14:04.632564    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:14:04.671881    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:14:04.718355    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:14:04.757603    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:14:04.800716    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 12:14:04.845540    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 12:14:04.886961    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:14:04.921829    3340 ssh_runner.go:195] Run: openssl version
	I0923 12:14:04.939061    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:14:04.962860    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:04.969622    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:04.981633    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:14:04.999213    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:14:05.022285    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 12:14:05.048613    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 12:14:05.055517    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 12:14:05.064282    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 12:14:05.080660    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 12:14:05.105258    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 12:14:05.133343    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 12:14:05.139063    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 12:14:05.146734    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 12:14:05.161913    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
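The `openssl x509 -hash` / `ln -fs ... <hash>.0` pairs above exist because OpenSSL locates CAs in `/etc/ssl/certs` by subject-name hash, so each installed PEM needs a `<subject-hash>.0` symlink. A sketch of that mechanism using a throwaway self-signed cert in a scratch directory (nothing here touches the real cert store; the `minikubeCA` subject is just an example):

```shell
# Scratch directory and a throwaway self-signed certificate.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/demoCA.pem" -days 1 2>/dev/null

# -hash prints the 8-hex-digit subject-name hash OpenSSL uses for lookup.
hash=$(openssl x509 -hash -noout -in "$dir/demoCA.pem")

# Link the PEM under "<hash>.0", mirroring the ln -fs calls in the log.
ln -fs "$dir/demoCA.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

The `.0` suffix is a collision counter: a second cert with the same subject hash would be linked as `<hash>.1`.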
	I0923 12:14:05.186221    3340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:14:05.192271    3340 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:14:05.192271    3340 kubeadm.go:392] StartCluster: {Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:14:05.203356    3340 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 12:14:05.233359    3340 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:14:05.255933    3340 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:14:05.280795    3340 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:14:05.295765    3340 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:14:05.295765    3340 kubeadm.go:157] found existing configuration files:
	
	I0923 12:14:05.306772    3340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:14:05.320875    3340 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:14:05.327388    3340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:14:05.349632    3340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:14:05.362718    3340 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:14:05.370850    3340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:14:05.398148    3340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:14:05.412983    3340 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:14:05.422172    3340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:14:05.445948    3340 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:14:05.460862    3340 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:14:05.470799    3340 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 12:14:05.485970    3340 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 12:14:05.683113    3340 kubeadm.go:310] W0923 12:14:05.893800    1762 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:14:05.684872    3340 kubeadm.go:310] W0923 12:14:05.895003    1762 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:14:05.803915    3340 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 12:14:17.868487    3340 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 12:14:17.868749    3340 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 12:14:17.868859    3340 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 12:14:17.869156    3340 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 12:14:17.869459    3340 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 12:14:17.869749    3340 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:14:17.872183    3340 out.go:235]   - Generating certificates and keys ...
	I0923 12:14:17.872822    3340 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 12:14:17.873082    3340 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 12:14:17.873082    3340 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 12:14:17.873082    3340 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 12:14:17.873082    3340 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 12:14:17.873607    3340 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 12:14:17.873838    3340 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 12:14:17.874064    3340 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-565300 localhost] and IPs [172.19.146.194 127.0.0.1 ::1]
	I0923 12:14:17.874231    3340 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 12:14:17.874475    3340 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-565300 localhost] and IPs [172.19.146.194 127.0.0.1 ::1]
	I0923 12:14:17.874642    3340 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 12:14:17.874781    3340 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 12:14:17.874863    3340 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 12:14:17.874930    3340 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:14:17.875064    3340 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:14:17.875330    3340 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:14:17.875399    3340 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:14:17.875638    3340 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:14:17.875638    3340 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:14:17.875638    3340 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:14:17.875638    3340 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:14:17.879172    3340 out.go:235]   - Booting up control plane ...
	I0923 12:14:17.879172    3340 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:14:17.879172    3340 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:14:17.879172    3340 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:14:17.879172    3340 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:14:17.880176    3340 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:14:17.880248    3340 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 12:14:17.880248    3340 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 12:14:17.880248    3340 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 12:14:17.880248    3340 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.078066ms
	I0923 12:14:17.881007    3340 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 12:14:17.881130    3340 kubeadm.go:310] [api-check] The API server is healthy after 7.002449166s
	I0923 12:14:17.881406    3340 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 12:14:17.881662    3340 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 12:14:17.881786    3340 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 12:14:17.882206    3340 kubeadm.go:310] [mark-control-plane] Marking the node ha-565300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 12:14:17.882206    3340 kubeadm.go:310] [bootstrap-token] Using token: w22tpi.aqmh61cssdet6ypg
	I0923 12:14:17.884602    3340 out.go:235]   - Configuring RBAC rules ...
	I0923 12:14:17.884602    3340 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 12:14:17.884602    3340 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 12:14:17.885197    3340 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 12:14:17.885197    3340 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 12:14:17.885798    3340 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 12:14:17.885863    3340 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 12:14:17.885863    3340 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 12:14:17.885863    3340 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 12:14:17.886404    3340 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 12:14:17.886404    3340 kubeadm.go:310] 
	I0923 12:14:17.886529    3340 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 12:14:17.886578    3340 kubeadm.go:310] 
	I0923 12:14:17.886643    3340 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 12:14:17.886643    3340 kubeadm.go:310] 
	I0923 12:14:17.886643    3340 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 12:14:17.886643    3340 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 12:14:17.887243    3340 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 12:14:17.887243    3340 kubeadm.go:310] 
	I0923 12:14:17.887243    3340 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 12:14:17.887243    3340 kubeadm.go:310] 
	I0923 12:14:17.887243    3340 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 12:14:17.887243    3340 kubeadm.go:310] 
	I0923 12:14:17.887243    3340 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 12:14:17.887243    3340 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 12:14:17.887243    3340 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 12:14:17.887784    3340 kubeadm.go:310] 
	I0923 12:14:17.887819    3340 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 12:14:17.887819    3340 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 12:14:17.887819    3340 kubeadm.go:310] 
	I0923 12:14:17.887819    3340 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w22tpi.aqmh61cssdet6ypg \
	I0923 12:14:17.888728    3340 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 \
	I0923 12:14:17.888792    3340 kubeadm.go:310] 	--control-plane 
	I0923 12:14:17.888849    3340 kubeadm.go:310] 
	I0923 12:14:17.888974    3340 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 12:14:17.889032    3340 kubeadm.go:310] 
	I0923 12:14:17.889136    3340 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w22tpi.aqmh61cssdet6ypg \
	I0923 12:14:17.889406    3340 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
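The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), which joining nodes use to pin the control plane's identity. A sketch of the computation with a throwaway CA (on a real node the input would be `/etc/kubernetes/pki/ca.crt`; the `/CN=kubernetes` subject here is illustrative):

```shell
# Throwaway CA certificate in a scratch directory.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null

# Extract the public key, re-encode it as DER, and hash it; the last awk
# field strips openssl's "(stdin)= " prefix from the digest line.
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```

Because only the public key is hashed, the value stays stable even if the CA certificate is re-issued with the same key pair.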
	I0923 12:14:17.889468    3340 cni.go:84] Creating CNI manager for ""
	I0923 12:14:17.889527    3340 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 12:14:17.897652    3340 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 12:14:17.908154    3340 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 12:14:17.916478    3340 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 12:14:17.916478    3340 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 12:14:17.965288    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 12:14:18.461915    3340 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:14:18.474690    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565300 minikube.k8s.io/updated_at=2024_09_23T12_14_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-565300 minikube.k8s.io/primary=true
	I0923 12:14:18.474690    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:18.515581    3340 ops.go:34] apiserver oom_adj: -16
	I0923 12:14:18.737205    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:19.239531    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:19.739080    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:20.237271    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:20.739765    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:21.239513    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:21.738315    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:22.240571    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:22.738499    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 12:14:22.856080    3340 kubeadm.go:1113] duration metric: took 4.3938678s to wait for elevateKubeSystemPrivileges
	I0923 12:14:22.856080    3340 kubeadm.go:394] duration metric: took 17.6626169s to StartCluster
	I0923 12:14:22.856080    3340 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:22.856080    3340 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:14:22.857047    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:14:22.858063    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 12:14:22.858063    3340 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:14:22.859047    3340 start.go:241] waiting for startup goroutines ...
	I0923 12:14:22.858063    3340 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:14:22.859047    3340 addons.go:69] Setting storage-provisioner=true in profile "ha-565300"
	I0923 12:14:22.859047    3340 addons.go:234] Setting addon storage-provisioner=true in "ha-565300"
	I0923 12:14:22.859047    3340 addons.go:69] Setting default-storageclass=true in profile "ha-565300"
	I0923 12:14:22.859047    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:14:22.859047    3340 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565300"
	I0923 12:14:22.859047    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:14:22.860051    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:22.860051    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:23.033456    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 12:14:23.390318    3340 start.go:971] {"host.minikube.internal": 172.19.144.1} host record injected into CoreDNS's ConfigMap
	I0923 12:14:24.949642    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:24.949642    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:24.949642    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:24.949642    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:24.951392    3340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:14:24.952215    3340 kapi.go:59] client config for ha-565300: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 12:14:24.953797    3340 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 12:14:24.954296    3340 addons.go:234] Setting addon default-storageclass=true in "ha-565300"
	I0923 12:14:24.954373    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:14:24.954455    3340 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:14:24.955415    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:24.956883    3340 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:14:24.956883    3340 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 12:14:24.956883    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:26.988410    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:26.988410    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:26.989075    3340 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 12:14:26.989075    3340 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 12:14:26.989202    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:14:27.139553    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:27.139553    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:27.139553    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:14:29.054903    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:14:29.054903    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:29.055017    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:14:29.502669    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:14:29.503094    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:29.503525    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:14:29.641979    3340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 12:14:30.824370    3340 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.1823118s)
	I0923 12:14:31.376452    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:14:31.377290    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:31.377660    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:14:31.497740    3340 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 12:14:31.634571    3340 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 12:14:31.634571    3340 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 12:14:31.634789    3340 round_trippers.go:463] GET https://172.19.159.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0923 12:14:31.634857    3340 round_trippers.go:469] Request Headers:
	I0923 12:14:31.634912    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:14:31.634912    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:14:31.648620    3340 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0923 12:14:31.650596    3340 round_trippers.go:463] PUT https://172.19.159.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 12:14:31.650596    3340 round_trippers.go:469] Request Headers:
	I0923 12:14:31.650596    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:14:31.650596    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:14:31.650596    3340 round_trippers.go:473]     Content-Type: application/json
	I0923 12:14:31.654188    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:14:31.658559    3340 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 12:14:31.663195    3340 addons.go:510] duration metric: took 8.8045375s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0923 12:14:31.663195    3340 start.go:246] waiting for cluster config update ...
	I0923 12:14:31.663195    3340 start.go:255] writing updated cluster config ...
	I0923 12:14:31.666015    3340 out.go:201] 
	I0923 12:14:31.677945    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:14:31.678072    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:14:31.682512    3340 out.go:177] * Starting "ha-565300-m02" control-plane node in "ha-565300" cluster
	I0923 12:14:31.689670    3340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:14:31.689670    3340 cache.go:56] Caching tarball of preloaded images
	I0923 12:14:31.689670    3340 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:14:31.689670    3340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:14:31.689670    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:14:31.691637    3340 start.go:360] acquireMachinesLock for ha-565300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:14:31.691637    3340 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-565300-m02"
	I0923 12:14:31.692634    3340 start.go:93] Provisioning new machine with config: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:14:31.692634    3340 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0923 12:14:31.697650    3340 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:14:31.697650    3340 start.go:159] libmachine.API.Create for "ha-565300" (driver="hyperv")
	I0923 12:14:31.697650    3340 client.go:168] LocalClient.Create starting
	I0923 12:14:31.697650    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:14:31.698634    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:14:31.698634    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0923 12:14:33.393825    3340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0923 12:14:33.393825    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:33.394161    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0923 12:14:34.890523    3340 main.go:141] libmachine: [stdout =====>] : False
	
	I0923 12:14:34.890596    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:34.890665    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:14:36.204595    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:14:36.204595    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:36.205483    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:14:39.318731    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:14:39.319414    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:39.321702    3340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:14:39.699798    3340 main.go:141] libmachine: Creating SSH key...
	I0923 12:14:39.810406    3340 main.go:141] libmachine: Creating VM...
	I0923 12:14:39.810406    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:14:42.290510    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:14:42.290510    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:42.290510    3340 main.go:141] libmachine: Using switch "Default Switch"
	I0923 12:14:42.290510    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:14:43.810395    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:14:43.810395    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:43.810395    3340 main.go:141] libmachine: Creating VHD
	I0923 12:14:43.810395    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0923 12:14:47.241762    3340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : B2171BFB-757A-4D97-9114-8CA0521DECDD
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0923 12:14:47.241762    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:47.241762    3340 main.go:141] libmachine: Writing magic tar header
	I0923 12:14:47.241946    3340 main.go:141] libmachine: Writing SSH key tar header
	I0923 12:14:47.251054    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0923 12:14:50.127124    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:14:50.127124    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:50.127124    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\disk.vhd' -SizeBytes 20000MB
	I0923 12:14:52.372418    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:14:52.372418    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:52.372515    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-565300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0923 12:14:55.561341    3340 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-565300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0923 12:14:55.561341    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:55.561341    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-565300-m02 -DynamicMemoryEnabled $false
	I0923 12:14:57.514586    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:14:57.514586    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:57.514586    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-565300-m02 -Count 2
	I0923 12:14:59.371303    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:14:59.371303    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:14:59.371303    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-565300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\boot2docker.iso'
	I0923 12:15:01.620130    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:01.620232    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:01.620232    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-565300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\disk.vhd'
	I0923 12:15:03.988986    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:03.988986    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:03.988986    3340 main.go:141] libmachine: Starting VM...
	I0923 12:15:03.990042    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-565300-m02
	I0923 12:15:06.810744    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:06.811779    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:06.811803    3340 main.go:141] libmachine: Waiting for host to start...
	I0923 12:15:06.811968    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:08.834938    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:08.834938    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:08.834938    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:11.073616    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:11.074308    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:12.074445    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:14.037868    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:14.037868    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:14.037868    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:16.335915    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:16.335915    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:17.336652    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:19.270193    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:19.270922    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:19.271002    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:21.529156    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:21.529156    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:22.529767    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:24.550500    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:24.550500    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:24.550500    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:26.814416    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:15:26.814727    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:27.815707    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:29.813008    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:29.813764    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:29.813914    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:32.106185    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:32.106709    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:32.106709    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:34.030921    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:34.030921    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:34.030921    3340 machine.go:93] provisionDockerMachine start ...
	I0923 12:15:34.031043    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:35.921907    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:35.921958    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:35.921958    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:38.228217    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:38.228217    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:38.232324    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:15:38.244886    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:15:38.244886    3340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:15:38.381661    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:15:38.381661    3340 buildroot.go:166] provisioning hostname "ha-565300-m02"
	I0923 12:15:38.381661    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:40.290359    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:40.290359    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:40.291025    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:42.505424    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:42.505424    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:42.510206    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:15:42.510491    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:15:42.510491    3340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565300-m02 && echo "ha-565300-m02" | sudo tee /etc/hostname
	I0923 12:15:42.672016    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565300-m02
	
	I0923 12:15:42.672051    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:44.502023    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:44.502023    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:44.502093    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:46.766612    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:46.766612    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:46.770837    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:15:46.771025    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:15:46.771025    3340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:15:46.927497    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:15:46.927497    3340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 12:15:46.927615    3340 buildroot.go:174] setting up certificates
	I0923 12:15:46.927615    3340 provision.go:84] configureAuth start
	I0923 12:15:46.927673    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:48.805075    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:48.805122    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:48.805122    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:50.983284    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:50.983284    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:50.983363    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:52.836297    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:52.836297    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:52.836798    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:55.084445    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:55.085383    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:55.085383    3340 provision.go:143] copyHostCerts
	I0923 12:15:55.085519    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 12:15:55.085724    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 12:15:55.085724    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 12:15:55.086093    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 12:15:55.086951    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 12:15:55.087140    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 12:15:55.087140    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 12:15:55.087372    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 12:15:55.087603    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 12:15:55.088230    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 12:15:55.088230    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 12:15:55.088597    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 12:15:55.089352    3340 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-565300-m02 san=[127.0.0.1 172.19.154.133 ha-565300-m02 localhost minikube]
	I0923 12:15:55.237599    3340 provision.go:177] copyRemoteCerts
	I0923 12:15:55.245799    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:15:55.245799    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:15:57.143033    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:15:57.143374    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:57.143374    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:15:59.417727    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:15:59.417727    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:15:59.418221    3340 sshutil.go:53] new ssh client: &{IP:172.19.154.133 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:15:59.523230    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.277142s)
	I0923 12:15:59.523230    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 12:15:59.523856    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:15:59.568229    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 12:15:59.568578    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:15:59.619610    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 12:15:59.620175    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 12:15:59.680492    3340 provision.go:87] duration metric: took 12.7520167s to configureAuth
	I0923 12:15:59.680492    3340 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:15:59.681114    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:15:59.681114    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:01.514582    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:01.514582    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:01.514582    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:03.756953    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:03.756953    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:03.761436    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:03.761513    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:03.761513    3340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:16:03.913754    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:16:03.913754    3340 buildroot.go:70] root file system type: tmpfs
	I0923 12:16:03.913977    3340 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:16:03.913977    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:05.738270    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:05.738790    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:05.738870    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:07.938357    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:07.938357    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:07.944182    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:07.944447    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:07.944447    3340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.146.194"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:16:08.114661    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.146.194
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:16:08.114661    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:09.917150    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:09.917150    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:09.917150    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:12.084333    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:12.084333    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:12.088083    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:12.088736    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:12.088736    3340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:16:14.229491    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 12:16:14.229547    3340 machine.go:96] duration metric: took 40.1959123s to provisionDockerMachine
	I0923 12:16:14.229547    3340 client.go:171] duration metric: took 1m42.5249759s to LocalClient.Create
	I0923 12:16:14.229609    3340 start.go:167] duration metric: took 1m42.5250373s to libmachine.API.Create "ha-565300"
	I0923 12:16:14.229609    3340 start.go:293] postStartSetup for "ha-565300-m02" (driver="hyperv")
	I0923 12:16:14.229609    3340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:16:14.237881    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:16:14.237881    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:16.067841    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:16.068798    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:16.068798    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:18.256548    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:18.256548    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:18.257375    3340 sshutil.go:53] new ssh client: &{IP:172.19.154.133 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:16:18.361095    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1229356s)
	I0923 12:16:18.370219    3340 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:16:18.376220    3340 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:16:18.376220    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 12:16:18.376220    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 12:16:18.377262    3340 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 12:16:18.377262    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 12:16:18.385398    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:16:18.402430    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 12:16:18.444430    3340 start.go:296] duration metric: took 4.214537s for postStartSetup
	I0923 12:16:18.445739    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:20.267780    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:20.268197    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:20.268271    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:22.464309    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:22.464309    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:22.464712    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:16:22.465968    3340 start.go:128] duration metric: took 1m50.765856s to createHost
	I0923 12:16:22.466486    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:24.341673    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:24.341731    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:24.341731    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:26.565660    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:26.565660    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:26.569433    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:26.570037    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:26.570037    3340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:16:26.705309    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727093786.909468732
	
	I0923 12:16:26.705344    3340 fix.go:216] guest clock: 1727093786.909468732
	I0923 12:16:26.705344    3340 fix.go:229] Guest: 2024-09-23 12:16:26.909468732 +0000 UTC Remote: 2024-09-23 12:16:22.465968 +0000 UTC m=+289.539245301 (delta=4.443500732s)
	I0923 12:16:26.705406    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:28.568250    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:28.568250    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:28.568250    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:30.803092    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:30.803092    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:30.809178    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:16:30.809926    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.154.133 22 <nil> <nil>}
	I0923 12:16:30.809926    3340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727093786
	I0923 12:16:30.958623    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 12:16:26 UTC 2024
	
	I0923 12:16:30.958623    3340 fix.go:236] clock set: Mon Sep 23 12:16:26 UTC 2024
	 (err=<nil>)
	I0923 12:16:30.958623    3340 start.go:83] releasing machines lock for "ha-565300-m02", held for 1m59.2589358s
	I0923 12:16:30.959627    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:32.821202    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:32.821202    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:32.821278    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:35.073211    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:35.073854    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:35.077287    3340 out.go:177] * Found network options:
	I0923 12:16:35.080813    3340 out.go:177]   - NO_PROXY=172.19.146.194
	W0923 12:16:35.083035    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:16:35.084591    3340 out.go:177]   - NO_PROXY=172.19.146.194
	W0923 12:16:35.087423    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:16:35.089375    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:16:35.091192    3340 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 12:16:35.091333    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:35.098072    3340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:16:35.098630    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:16:37.015570    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:37.015570    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:37.016469    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:37.017547    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:37.017547    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:37.017547    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:39.297171    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:39.297171    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:39.297171    3340 sshutil.go:53] new ssh client: &{IP:172.19.154.133 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:16:39.320058    3340 main.go:141] libmachine: [stdout =====>] : 172.19.154.133
	
	I0923 12:16:39.320058    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:39.320670    3340 sshutil.go:53] new ssh client: &{IP:172.19.154.133 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m02\id_rsa Username:docker}
	I0923 12:16:39.400156    3340 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3085541s)
	W0923 12:16:39.400235    3340 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 12:16:39.416960    3340 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3185965s)
	W0923 12:16:39.416960    3340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:16:39.427889    3340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:16:39.454650    3340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:16:39.454650    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:16:39.454817    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:16:39.498224    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0923 12:16:39.519785    3340 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 12:16:39.519785    3340 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 12:16:39.527007    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:16:39.546876    3340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:16:39.555504    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:16:39.584822    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:16:39.617002    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:16:39.648736    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:16:39.677067    3340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:16:39.705842    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:16:39.732897    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:16:39.766501    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
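The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the cgroupfs driver and pin the pause image. A minimal reproduction of the same edits against a scratch copy of the file (the sample config content is an assumption; the sed expressions are taken from the log):

```shell
# Reproduce the log's in-place containerd edits on a scratch file
# instead of /etc/containerd/config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
EOF

# Force the cgroupfs driver, as in the log (SystemdCgroup = false).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
# Pin the sandbox (pause) image version.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"

grep 'SystemdCgroup' "$cfg"     # now false
grep 'sandbox_image' "$cfg"     # now pause:3.10
rm -f "$cfg"
```

The `\1` backreference preserves the original indentation, which is why the expressions capture the leading spaces instead of anchoring on column zero.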
	I0923 12:16:39.794268    3340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:16:39.811048    3340 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:16:39.821149    3340 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:16:39.851256    3340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
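The status-255 sysctl failure above is expected when the `br_netfilter` module is not yet loaded: the `/proc/sys/net/bridge/` tree only appears after `modprobe br_netfilter`, which is exactly the fallback the log runs next. A sketch of that probe, with a root-prefix parameter (an illustrative addition, not in minikube) so it can be exercised without touching the live kernel:

```shell
# Probe for the bridge-netfilter sysctl the way the log does; if the /proc
# entry is absent, the real flow falls back to `sudo modprobe br_netfilter`.
# The optional root prefix lets the check run against a fake filesystem.
check_bridge_nf() {
  root="${1:-}"
  if [ -f "$root/proc/sys/net/bridge/bridge-nf-call-iptables" ]; then
    echo "present"
  else
    # On a real host this branch would run: sudo modprobe br_netfilter
    echo "missing"
  fi
}

check_bridge_nf /nonexistent-root   # always "missing"
```

minikube treats the failed probe as non-fatal ("which might be okay") precisely because the subsequent modprobe creates the entry before kubeadm needs it.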
	I0923 12:16:39.875256    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:40.056862    3340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:16:40.087721    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:16:40.098089    3340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:16:40.129191    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:16:40.166235    3340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:16:40.215789    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:16:40.246795    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:16:40.279681    3340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:16:40.336665    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:16:40.359400    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:16:40.411137    3340 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:16:40.425888    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:16:40.443058    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:16:40.483337    3340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:16:40.663725    3340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:16:40.835255    3340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:16:40.835373    3340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:16:40.882912    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:41.068033    3340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:16:43.616134    3340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5469046s)
	I0923 12:16:43.627763    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:16:43.656432    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:16:43.686542    3340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:16:43.880055    3340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:16:44.044080    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:44.216298    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:16:44.251908    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:16:44.282039    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:44.462138    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:16:44.560104    3340 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:16:44.569076    3340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:16:44.576755    3340 start.go:563] Will wait 60s for crictl version
	I0923 12:16:44.584435    3340 ssh_runner.go:195] Run: which crictl
	I0923 12:16:44.602392    3340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:16:44.652255    3340 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:16:44.658900    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:16:44.693724    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:16:44.726505    3340 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:16:44.729492    3340 out.go:177]   - env NO_PROXY=172.19.146.194
	I0923 12:16:44.732487    3340 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 12:16:44.734497    3340 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 12:16:44.734497    3340 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 12:16:44.736383    3340 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 12:16:44.736383    3340 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 12:16:44.739193    3340 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 12:16:44.739286    3340 ip.go:214] interface addr: 172.19.144.1/20
	I0923 12:16:44.748307    3340 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 12:16:44.754987    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
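The `/etc/hosts` command above is an idempotent update: strip any existing `host.minikube.internal` line, then append the current mapping, so re-running never duplicates the entry. The same pattern against a scratch file (`update_hosts_entry` is an illustrative helper name, not minikube's):

```shell
# Idempotent hosts-file update, as in the log: remove the old mapping for a
# name, append the new one. Operates on a scratch file, not /etc/hosts.
update_hosts_entry() {
  file="$1"; ip="$2"; name="$3"
  tab=$(printf '\t')
  tmp=$(mktemp)
  { grep -v "${tab}${name}\$" "$file"; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
  cat "$tmp" > "$file"
  rm -f "$tmp"
}

hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"
update_hosts_entry "$hosts" 172.19.144.1 host.minikube.internal
update_hosts_entry "$hosts" 172.19.144.1 host.minikube.internal
grep -c 'host.minikube.internal' "$hosts"   # 1 — still a single entry
rm -f "$hosts"
```

Matching on the tab plus the anchored name keeps unrelated lines (and names that merely share a suffix) intact.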
	I0923 12:16:44.776904    3340 mustload.go:65] Loading cluster: ha-565300
	I0923 12:16:44.777295    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:16:44.777900    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:16:46.603753    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:46.604398    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:46.604398    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:16:46.604855    3340 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300 for IP: 172.19.154.133
	I0923 12:16:46.604855    3340 certs.go:194] generating shared ca certs ...
	I0923 12:16:46.604855    3340 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:16:46.605628    3340 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 12:16:46.605628    3340 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 12:16:46.605628    3340 certs.go:256] generating profile certs ...
	I0923 12:16:46.606492    3340 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key
	I0923 12:16:46.606552    3340 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.d6e24336
	I0923 12:16:46.606552    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.d6e24336 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.146.194 172.19.154.133 172.19.159.254]
	I0923 12:16:46.702433    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.d6e24336 ...
	I0923 12:16:46.702433    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.d6e24336: {Name:mkf65afc351c4cfc9398fe8eef0be9bde7269a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:16:46.703961    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.d6e24336 ...
	I0923 12:16:46.703961    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.d6e24336: {Name:mkfaaf958dc4b0425649b8bb0994634b6b271bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:16:46.705334    3340 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.d6e24336 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt
	I0923 12:16:46.720514    3340 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.d6e24336 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key
	I0923 12:16:46.721504    3340 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key
	I0923 12:16:46.721504    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:16:46.721676    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:16:46.721890    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:16:46.721890    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:16:46.721890    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:16:46.721890    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:16:46.722520    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:16:46.723162    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:16:46.723652    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 12:16:46.724002    3340 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 12:16:46.724090    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 12:16:46.724223    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 12:16:46.724223    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 12:16:46.724742    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 12:16:46.724992    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 12:16:46.724992    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 12:16:46.724992    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:16:46.725593    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 12:16:46.725799    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:16:48.599986    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:48.599986    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:48.600086    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:50.832198    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:16:50.832198    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:50.832512    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:16:50.931245    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:16:50.938436    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:16:50.964515    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:16:50.970662    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 12:16:50.999503    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:16:51.006390    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:16:51.033869    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:16:51.047052    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:16:51.075869    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:16:51.085065    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:16:51.111808    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:16:51.118184    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0923 12:16:51.136209    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:16:51.183105    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 12:16:51.227159    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:16:51.269252    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:16:51.312155    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0923 12:16:51.354245    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:16:51.400171    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:16:51.441761    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:16:51.493519    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 12:16:51.534979    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:16:51.579338    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 12:16:51.622021    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:16:51.653903    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 12:16:51.683130    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:16:51.713551    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:16:51.743561    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:16:51.773348    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0923 12:16:51.804688    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:16:51.840803    3340 ssh_runner.go:195] Run: openssl version
	I0923 12:16:51.856691    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 12:16:51.884344    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 12:16:51.890462    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 12:16:51.899296    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 12:16:51.916454    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:16:51.942320    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:16:51.970014    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:16:51.977814    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:16:51.986251    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:16:52.002533    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:16:52.032169    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 12:16:52.058120    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 12:16:52.064911    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 12:16:52.077167    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 12:16:52.093304    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 12:16:52.122180    3340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:16:52.129371    3340 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:16:52.129743    3340 kubeadm.go:934] updating node {m02 172.19.154.133 8443 v1.31.1 docker true true} ...
	I0923 12:16:52.129957    3340 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.154.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:16:52.130073    3340 kube-vip.go:115] generating kube-vip config ...
	I0923 12:16:52.139733    3340 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:16:52.167402    3340 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:16:52.167681    3340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0923 12:16:52.178341    3340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:16:52.199159    3340 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:16:52.209676    3340 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:16:52.228973    3340 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl
	I0923 12:16:52.229254    3340 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet
	I0923 12:16:52.229295    3340 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm
	I0923 12:16:53.262831    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:16:53.270837    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:16:53.278585    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:16:53.279074    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:16:53.295258    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:16:53.304355    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:16:53.378529    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:16:53.378986    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:16:53.404280    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:16:53.457167    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:16:53.466118    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:16:53.491705    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:16:53.491841    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
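Each binary transfer above follows the same pattern: `stat -c "%s %y"` the remote path first, and only scp the cached binary when the stat fails (or, on a populated node, when size/mtime differ). A local-file sketch of that check (`copy_if_missing` is an illustrative helper, and this version only handles the missing-file case shown in the log):

```shell
# Stat-before-copy, as in the log: probe the destination with stat and
# transfer only when the probe fails. Modeled with local cp instead of scp.
copy_if_missing() {
  src="$1"; dst="$2"
  if stat -c "%s %y" "$dst" >/dev/null 2>&1; then
    echo "exists: $dst"
  else
    cp "$src" "$dst"
    echo "copied: $dst"
  fi
}

src=$(mktemp); dst="$(mktemp -d)/kubectl"
copy_if_missing "$src" "$dst"   # copied: ...
copy_if_missing "$src" "$dst"   # exists: ...
```

This is why a first start always shows the "Process exited with status 1 ... No such file or directory" stat failures: they are the expected negative result that triggers the transfer, not errors.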
	I0923 12:16:54.397217    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:16:54.413473    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0923 12:16:54.442021    3340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:16:54.470289    3340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:16:54.508205    3340 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:16:54.515096    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:16:54.545175    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:16:54.730833    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:16:54.756927    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:16:54.757398    3340 start.go:317] joinCluster: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:16:54.757398    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:16:54.757398    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:16:56.573976    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:16:56.573976    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:56.574533    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:16:58.807739    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:16:58.807739    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:16:58.808354    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:16:59.140017    3340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3822165s)
	I0923 12:16:59.140166    3340 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:16:59.140252    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 37697z.9t4d8g449fg2twj4 --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-565300-m02 --control-plane --apiserver-advertise-address=172.19.154.133 --apiserver-bind-port=8443"
	I0923 12:17:42.171741    3340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 37697z.9t4d8g449fg2twj4 --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-565300-m02 --control-plane --apiserver-advertise-address=172.19.154.133 --apiserver-bind-port=8443": (43.0285844s)
	I0923 12:17:42.171741    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:17:42.897277    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565300-m02 minikube.k8s.io/updated_at=2024_09_23T12_17_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-565300 minikube.k8s.io/primary=false
	I0923 12:17:43.084341    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565300-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0923 12:17:43.228725    3340 start.go:319] duration metric: took 48.4680554s to joinCluster
	I0923 12:17:43.228914    3340 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:17:43.229664    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:17:43.231397    3340 out.go:177] * Verifying Kubernetes components...
	I0923 12:17:43.241851    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:17:43.537872    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:17:43.558676    3340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:17:43.559325    3340 kapi.go:59] client config for ha-565300: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:17:43.559455    3340 kubeadm.go:483] Overriding stale ClientConfig host https://172.19.159.254:8443 with https://172.19.146.194:8443
	I0923 12:17:43.560194    3340 node_ready.go:35] waiting up to 6m0s for node "ha-565300-m02" to be "Ready" ...
	I0923 12:17:43.560444    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:43.560444    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:43.560444    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:43.560516    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:43.577509    3340 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0923 12:17:44.060739    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:44.060739    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:44.060739    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:44.060739    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:44.072511    3340 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:17:44.561244    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:44.561244    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:44.561244    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:44.561244    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:44.566688    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:45.060807    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:45.060807    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:45.060807    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:45.060807    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:45.067085    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:45.561172    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:45.561172    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:45.561172    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:45.561241    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:45.566161    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:45.567483    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:46.060823    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:46.060823    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:46.060823    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:46.060823    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:46.066825    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:46.561883    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:46.561883    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:46.561883    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:46.561883    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:46.565882    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:17:47.061585    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:47.061585    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:47.061585    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:47.061585    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:47.066987    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:47.561467    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:47.561566    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:47.561566    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:47.561566    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:47.568768    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:17:47.569637    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:48.061522    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:48.061522    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:48.061522    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:48.061522    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:48.067840    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:48.561001    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:48.561001    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:48.561001    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:48.561001    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:48.566004    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:49.062042    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:49.062105    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:49.062105    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:49.062185    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:49.067078    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:49.561008    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:49.561008    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:49.561008    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:49.561008    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:49.566769    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:50.061101    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:50.061101    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:50.061101    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:50.061101    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:50.315715    3340 round_trippers.go:574] Response Status: 200 OK in 254 milliseconds
	I0923 12:17:50.316728    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:50.562144    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:50.562144    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:50.562382    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:50.562382    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:50.568182    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:51.060904    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:51.060904    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:51.060904    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:51.060904    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:51.065697    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:51.561708    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:51.561708    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:51.561708    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:51.561708    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:51.568002    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:52.061018    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:52.061018    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:52.061018    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:52.061018    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:52.066813    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:52.560912    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:52.560912    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:52.560912    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:52.560912    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:52.566702    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:52.567605    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:53.061770    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:53.062111    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:53.062111    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:53.062111    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:53.067725    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:53.561053    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:53.561053    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:53.561053    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:53.561053    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:53.567504    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:17:54.061453    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:54.061453    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:54.061453    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:54.061453    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:54.066814    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:54.562027    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:54.562027    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:54.562027    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:54.562027    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:54.567759    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:54.568562    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:55.062024    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:55.062024    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:55.062024    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:55.062024    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:55.066868    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:55.562855    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:55.562855    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:55.562855    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:55.562855    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:55.567901    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:56.061841    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:56.061841    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:56.061841    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:56.061841    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:56.067589    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:56.561601    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:56.561601    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:56.561601    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:56.561601    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:56.565916    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:57.061770    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:57.061770    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:57.061770    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:57.061770    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:57.066170    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:57.066899    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:57.561592    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:57.561592    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:57.561592    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:57.561592    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:57.566974    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:58.061919    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:58.061919    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:58.061919    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:58.061919    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:58.066110    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:58.562293    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:58.562293    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:58.562293    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:58.562293    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:58.567761    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:17:59.062415    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:59.062415    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:59.062415    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:59.062415    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:59.067197    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:17:59.068062    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:17:59.561998    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:17:59.561998    3340 round_trippers.go:469] Request Headers:
	I0923 12:17:59.561998    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:17:59.561998    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:17:59.567039    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:00.061465    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:00.061465    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:00.061465    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:00.061465    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:00.067105    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:00.561586    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:00.561586    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:00.561586    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:00.561586    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:00.566690    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:01.061947    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:01.061947    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:01.061947    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:01.061947    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:01.068043    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:01.068778    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:18:01.561820    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:01.561820    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:01.561820    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:01.561820    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:01.567622    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:02.061943    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:02.061943    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:02.061943    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:02.061943    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:02.067765    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:02.562489    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:02.562489    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:02.562489    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:02.562489    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:02.574227    3340 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 12:18:03.062242    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:03.062242    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:03.062242    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:03.062242    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:03.068929    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:03.069243    3340 node_ready.go:53] node "ha-565300-m02" has status "Ready":"False"
	I0923 12:18:03.561767    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:03.561767    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:03.561767    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:03.561767    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:03.567315    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:04.062124    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:04.062124    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:04.062124    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:04.062124    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:04.068387    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:04.562176    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:04.562176    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:04.562176    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:04.562176    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:04.568967    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:05.063623    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:05.063717    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.063717    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.063792    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.072290    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:18:05.072849    3340 node_ready.go:49] node "ha-565300-m02" has status "Ready":"True"
	I0923 12:18:05.072849    3340 node_ready.go:38] duration metric: took 21.5111279s for node "ha-565300-m02" to be "Ready" ...
	I0923 12:18:05.072849    3340 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:18:05.072849    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:05.072849    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.072849    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.072849    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.080783    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:18:05.091165    3340 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.091165    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7jzhc
	I0923 12:18:05.091165    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.091165    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.091165    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.096479    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:05.097204    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.097204    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.097204    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.097204    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.101339    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:18:05.102805    3340 pod_ready.go:93] pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.102805    3340 pod_ready.go:82] duration metric: took 11.6387ms for pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.102805    3340 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.102805    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kf224
	I0923 12:18:05.102805    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.102805    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.102805    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.106401    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.107685    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.107773    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.107773    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.107773    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.111017    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.111498    3340 pod_ready.go:93] pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.111564    3340 pod_ready.go:82] duration metric: took 8.759ms for pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.111564    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.111631    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300
	I0923 12:18:05.111631    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.111631    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.111631    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.115083    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.115669    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.115724    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.115724    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.115724    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.119781    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:18:05.119781    3340 pod_ready.go:93] pod "etcd-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.119781    3340 pod_ready.go:82] duration metric: took 8.2163ms for pod "etcd-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.119781    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.120319    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300-m02
	I0923 12:18:05.120319    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.120319    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.120319    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.123494    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.125026    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:05.125026    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.125026    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.125026    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.128385    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.128879    3340 pod_ready.go:93] pod "etcd-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.128879    3340 pod_ready.go:82] duration metric: took 9.0971ms for pod "etcd-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.128970    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.262871    3340 request.go:632] Waited for 133.8513ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300
	I0923 12:18:05.263165    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300
	I0923 12:18:05.263165    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.263165    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.263165    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.267122    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:18:05.463099    3340 request.go:632] Waited for 195.162ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.463694    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:05.463694    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.463694    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.463799    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.468772    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:18:05.469264    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.469340    3340 pod_ready.go:82] duration metric: took 340.3475ms for pod "kube-apiserver-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.469340    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.663065    3340 request.go:632] Waited for 193.7112ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m02
	I0923 12:18:05.663418    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m02
	I0923 12:18:05.663418    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.663625    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.663625    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.671500    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:18:05.863124    3340 request.go:632] Waited for 190.7906ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:05.863124    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:05.863124    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:05.863124    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:05.863124    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:05.868637    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:05.868926    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:05.869452    3340 pod_ready.go:82] duration metric: took 399.5584ms for pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:05.869452    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.063339    3340 request.go:632] Waited for 193.7803ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300
	I0923 12:18:06.063339    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300
	I0923 12:18:06.063339    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.063339    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.063339    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.069782    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:06.263542    3340 request.go:632] Waited for 192.2584ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:06.263542    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:06.263542    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.263542    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.263542    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.269259    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:06.270238    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:06.270320    3340 pod_ready.go:82] duration metric: took 400.8413ms for pod "kube-controller-manager-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.270320    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.463597    3340 request.go:632] Waited for 193.0966ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m02
	I0923 12:18:06.463597    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m02
	I0923 12:18:06.463597    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.463597    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.463597    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.469696    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:06.663242    3340 request.go:632] Waited for 192.1872ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:06.663723    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:06.663723    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.663855    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.663855    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.668902    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:06.669957    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:06.670073    3340 pod_ready.go:82] duration metric: took 399.6444ms for pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.670141    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzwmh" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:06.862982    3340 request.go:632] Waited for 192.7033ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzwmh
	I0923 12:18:06.862982    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzwmh
	I0923 12:18:06.862982    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:06.862982    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:06.862982    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:06.869796    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:07.063178    3340 request.go:632] Waited for 192.5882ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:07.063178    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:07.063178    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.063178    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.063178    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.068322    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:07.068848    3340 pod_ready.go:93] pod "kube-proxy-jzwmh" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:07.069033    3340 pod_ready.go:82] duration metric: took 398.7827ms for pod "kube-proxy-jzwmh" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.069033    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s4s8g" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.263439    3340 request.go:632] Waited for 194.393ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s4s8g
	I0923 12:18:07.263439    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s4s8g
	I0923 12:18:07.263439    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.263439    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.263439    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.269141    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:07.463892    3340 request.go:632] Waited for 193.5763ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:07.463892    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:07.463892    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.463892    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.463892    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.469866    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:07.470189    3340 pod_ready.go:93] pod "kube-proxy-s4s8g" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:07.470189    3340 pod_ready.go:82] duration metric: took 401.1287ms for pod "kube-proxy-s4s8g" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.470189    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.663766    3340 request.go:632] Waited for 193.5638ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300
	I0923 12:18:07.663766    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300
	I0923 12:18:07.663766    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.663766    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.663766    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.668975    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:07.863375    3340 request.go:632] Waited for 193.5517ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:07.863375    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:18:07.863375    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:07.863897    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:07.863897    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:07.870426    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:18:07.871420    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:07.871500    3340 pod_ready.go:82] duration metric: took 401.2838ms for pod "kube-scheduler-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:07.871500    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:08.063739    3340 request.go:632] Waited for 192.1666ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m02
	I0923 12:18:08.063991    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m02
	I0923 12:18:08.063991    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.063991    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.063991    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.069332    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:08.263960    3340 request.go:632] Waited for 193.7897ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:08.263960    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:18:08.263960    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.263960    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.263960    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.269032    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:08.269582    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:18:08.269582    3340 pod_ready.go:82] duration metric: took 398.0547ms for pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:18:08.269582    3340 pod_ready.go:39] duration metric: took 3.1965166s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:18:08.269751    3340 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:18:08.277928    3340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:18:08.302019    3340 api_server.go:72] duration metric: took 25.0713421s to wait for apiserver process to appear ...
	I0923 12:18:08.302019    3340 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:18:08.302019    3340 api_server.go:253] Checking apiserver healthz at https://172.19.146.194:8443/healthz ...
	I0923 12:18:08.310467    3340 api_server.go:279] https://172.19.146.194:8443/healthz returned 200:
	ok
	I0923 12:18:08.310655    3340 round_trippers.go:463] GET https://172.19.146.194:8443/version
	I0923 12:18:08.310655    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.310655    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.310774    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.312478    3340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:18:08.312656    3340 api_server.go:141] control plane version: v1.31.1
	I0923 12:18:08.312697    3340 api_server.go:131] duration metric: took 10.6775ms to wait for apiserver health ...
	I0923 12:18:08.312697    3340 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:18:08.463914    3340 request.go:632] Waited for 151.1261ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:08.463914    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:08.463914    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.463914    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.463914    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.469905    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:08.477123    3340 system_pods.go:59] 17 kube-system pods found
	I0923 12:18:08.477123    3340 system_pods.go:61] "coredns-7c65d6cfc9-7jzhc" [3410fd4d-a455-48c7-a6c3-7b3af6aa50a6] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "coredns-7c65d6cfc9-kf224" [08055950-19ea-4d96-b610-ca1d025c25c2] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "etcd-ha-565300" [fa5fe799-27bb-442e-9093-70d1f91fd7f3] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "etcd-ha-565300-m02" [18c247e2-8721-4662-b8db-b9174e535412] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kindnet-gvvph" [c728d1b2-d98f-4947-a971-dca1b05ba54a] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kindnet-jcj4l" [e9f183eb-5b54-4852-a996-4b4ce9a938d9] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-apiserver-ha-565300" [89e33fd1-9346-4a7d-a6c2-37a1cc636b58] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-apiserver-ha-565300-m02" [8c350e1d-ee2d-4a80-8ed8-8140a2b2e660] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-controller-manager-ha-565300" [d4599166-8583-47c0-a3c8-dc8c28fac9a2] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-controller-manager-ha-565300-m02" [6f035dd0-acd5-4162-b0d1-f37dff03d62f] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-proxy-jzwmh" [335d0452-7c30-4fe2-b0bb-d79af97b1a2d] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-proxy-s4s8g" [85c46e0e-ab32-420e-a9b7-fee9d360c8ec] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-scheduler-ha-565300" [a9ea8c2a-bfe0-4c4d-9da8-fd3b48b518b1] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-scheduler-ha-565300-m02" [de3cea24-2ae5-4a8e-8dff-3baa6cbd136f] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-vip-ha-565300" [800f2b80-94bc-4068-86eb-95bc7d58cdd7] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "kube-vip-ha-565300-m02" [5a2386d6-9706-4c61-9e8a-b1a39838f0f9] Running
	I0923 12:18:08.477123    3340 system_pods.go:61] "storage-provisioner" [e8126304-9d6c-4f7f-ac79-f0bbf61690b3] Running
	I0923 12:18:08.477123    3340 system_pods.go:74] duration metric: took 164.4146ms to wait for pod list to return data ...
	I0923 12:18:08.477123    3340 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:18:08.663572    3340 request.go:632] Waited for 186.4367ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:18:08.663572    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:18:08.663572    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.663572    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.663572    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.668785    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:08.669270    3340 default_sa.go:45] found service account: "default"
	I0923 12:18:08.669351    3340 default_sa.go:55] duration metric: took 192.2154ms for default service account to be created ...
	I0923 12:18:08.669422    3340 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:18:08.863842    3340 request.go:632] Waited for 194.2973ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:08.863842    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:18:08.863842    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:08.863842    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:08.863842    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:08.872004    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:18:08.878357    3340 system_pods.go:86] 17 kube-system pods found
	I0923 12:18:08.878450    3340 system_pods.go:89] "coredns-7c65d6cfc9-7jzhc" [3410fd4d-a455-48c7-a6c3-7b3af6aa50a6] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "coredns-7c65d6cfc9-kf224" [08055950-19ea-4d96-b610-ca1d025c25c2] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "etcd-ha-565300" [fa5fe799-27bb-442e-9093-70d1f91fd7f3] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "etcd-ha-565300-m02" [18c247e2-8721-4662-b8db-b9174e535412] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kindnet-gvvph" [c728d1b2-d98f-4947-a971-dca1b05ba54a] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kindnet-jcj4l" [e9f183eb-5b54-4852-a996-4b4ce9a938d9] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-apiserver-ha-565300" [89e33fd1-9346-4a7d-a6c2-37a1cc636b58] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-apiserver-ha-565300-m02" [8c350e1d-ee2d-4a80-8ed8-8140a2b2e660] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-controller-manager-ha-565300" [d4599166-8583-47c0-a3c8-dc8c28fac9a2] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-controller-manager-ha-565300-m02" [6f035dd0-acd5-4162-b0d1-f37dff03d62f] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-proxy-jzwmh" [335d0452-7c30-4fe2-b0bb-d79af97b1a2d] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-proxy-s4s8g" [85c46e0e-ab32-420e-a9b7-fee9d360c8ec] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-scheduler-ha-565300" [a9ea8c2a-bfe0-4c4d-9da8-fd3b48b518b1] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-scheduler-ha-565300-m02" [de3cea24-2ae5-4a8e-8dff-3baa6cbd136f] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-vip-ha-565300" [800f2b80-94bc-4068-86eb-95bc7d58cdd7] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "kube-vip-ha-565300-m02" [5a2386d6-9706-4c61-9e8a-b1a39838f0f9] Running
	I0923 12:18:08.878450    3340 system_pods.go:89] "storage-provisioner" [e8126304-9d6c-4f7f-ac79-f0bbf61690b3] Running
	I0923 12:18:08.878450    3340 system_pods.go:126] duration metric: took 209.0142ms to wait for k8s-apps to be running ...
	I0923 12:18:08.878450    3340 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:18:08.890654    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:18:08.915347    3340 system_svc.go:56] duration metric: took 36.8942ms WaitForService to wait for kubelet
	I0923 12:18:08.915347    3340 kubeadm.go:582] duration metric: took 25.6846285s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:18:08.915409    3340 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:18:09.063228    3340 request.go:632] Waited for 147.7537ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes
	I0923 12:18:09.063573    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes
	I0923 12:18:09.063573    3340 round_trippers.go:469] Request Headers:
	I0923 12:18:09.063573    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:18:09.063573    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:18:09.069107    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:18:09.070175    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:18:09.070251    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:18:09.070251    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:18:09.070251    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:18:09.070251    3340 node_conditions.go:105] duration metric: took 154.8316ms to run NodePressure ...
	I0923 12:18:09.070251    3340 start.go:241] waiting for startup goroutines ...
	I0923 12:18:09.070325    3340 start.go:255] writing updated cluster config ...
	I0923 12:18:09.073500    3340 out.go:201] 
	I0923 12:18:09.091181    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:18:09.091411    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:18:09.100346    3340 out.go:177] * Starting "ha-565300-m03" control-plane node in "ha-565300" cluster
	I0923 12:18:09.103066    3340 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:18:09.103180    3340 cache.go:56] Caching tarball of preloaded images
	I0923 12:18:09.103555    3340 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:18:09.103555    3340 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:18:09.103555    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:18:09.107634    3340 start.go:360] acquireMachinesLock for ha-565300-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:18:09.107634    3340 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-565300-m03"
	I0923 12:18:09.108337    3340 start.go:93] Provisioning new machine with config: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:18:09.108475    3340 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0923 12:18:09.111519    3340 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 12:18:09.111519    3340 start.go:159] libmachine.API.Create for "ha-565300" (driver="hyperv")
	I0923 12:18:09.111519    3340 client.go:168] LocalClient.Create starting
	I0923 12:18:09.112535    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0923 12:18:09.112535    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:18:09.112535    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:18:09.113080    3340 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0923 12:18:09.113147    3340 main.go:141] libmachine: Decoding PEM data...
	I0923 12:18:09.113281    3340 main.go:141] libmachine: Parsing certificate...
	I0923 12:18:09.113379    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0923 12:18:10.806778    3340 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0923 12:18:10.806854    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:10.806919    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0923 12:18:12.315363    3340 main.go:141] libmachine: [stdout =====>] : False
	
	I0923 12:18:12.315363    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:12.315527    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:18:13.630727    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:18:13.630727    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:13.630727    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:18:16.798971    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:18:16.798971    3340 main.go:141] libmachine: [stderr =====>] : 
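The switch query above returns a JSON array filtered to external switches plus the well-known Default Switch GUID. A minimal sketch of reducing that output to a usable switch name — the helper `pick_switch` is hypothetical, illustrating the same preference order as the `Where-Object` filter (External first, then the Default Switch by its fixed Id):

```python
import json

# GUID of Hyper-V's built-in "Default Switch", as matched in the log's filter.
DEFAULT_SWITCH_ID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

def pick_switch(stdout: str) -> str:
    """Prefer an External switch; otherwise fall back to the Default Switch."""
    switches = json.loads(stdout)
    # In Hyper-V's enum, SwitchType 2 == External, 1 == Internal.
    for sw in switches:
        if sw["SwitchType"] == 2:
            return sw["Name"]
    for sw in switches:
        if sw["Id"].lower() == DEFAULT_SWITCH_ID:
            return sw["Name"]
    raise RuntimeError("no usable Hyper-V switch found")

stdout = '''[
    {"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
     "Name": "Default Switch",
     "SwitchType": 1}
]'''
print(pick_switch(stdout))  # Default Switch
```

With the payload shown in the log, only the GUID fallback matches, which is why the run proceeds with "Default Switch".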
	I0923 12:18:16.800962    3340 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 12:18:17.147222    3340 main.go:141] libmachine: Creating SSH key...
	I0923 12:18:17.265476    3340 main.go:141] libmachine: Creating VM...
	I0923 12:18:17.266481    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 12:18:19.812541    3340 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 12:18:19.812541    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:19.812541    3340 main.go:141] libmachine: Using switch "Default Switch"
	I0923 12:18:19.812541    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 12:18:21.392940    3340 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 12:18:21.393972    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:21.393972    3340 main.go:141] libmachine: Creating VHD
	I0923 12:18:21.394036    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0923 12:18:24.745970    3340 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 572D1E1F-CF72-433A-A3B1-2FCF56C6B5B3
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0923 12:18:24.747018    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:24.747018    3340 main.go:141] libmachine: Writing magic tar header
	I0923 12:18:24.747018    3340 main.go:141] libmachine: Writing SSH key tar header
	I0923 12:18:24.755422    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0923 12:18:27.655166    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:27.656236    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:27.656394    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\disk.vhd' -SizeBytes 20000MB
	I0923 12:18:29.926223    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:29.926223    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:29.926734    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-565300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0923 12:18:33.088698    3340 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-565300-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0923 12:18:33.089494    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:33.089494    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-565300-m03 -DynamicMemoryEnabled $false
	I0923 12:18:35.048041    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:35.048322    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:35.048322    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-565300-m03 -Count 2
	I0923 12:18:36.953873    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:36.954893    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:36.954893    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-565300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\boot2docker.iso'
	I0923 12:18:39.218869    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:39.218869    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:39.219752    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-565300-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\disk.vhd'
	I0923 12:18:41.528327    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:41.528327    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:41.528327    3340 main.go:141] libmachine: Starting VM...
	I0923 12:18:41.528327    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-565300-m03
	I0923 12:18:44.307967    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:44.308902    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:44.308902    3340 main.go:141] libmachine: Waiting for host to start...
	I0923 12:18:44.308902    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:18:46.280983    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:18:46.280983    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:46.280983    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:18:48.493926    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:48.493992    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:49.495241    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:18:51.475969    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:18:51.476065    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:51.476123    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:18:53.685779    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:53.685857    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:54.686645    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:18:56.619214    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:18:56.619783    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:56.619783    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:18:58.837535    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:18:58.837535    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:18:59.840742    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:01.788340    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:01.788340    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:01.789052    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:04.016339    3340 main.go:141] libmachine: [stdout =====>] : 
	I0923 12:19:04.016553    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:05.017603    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:06.960456    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:06.960456    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:06.960456    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:09.280732    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:09.280732    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:09.281551    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:11.246280    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:11.246392    3340 main.go:141] libmachine: [stderr =====>] : 
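The "Waiting for host to start..." phase above re-issues the same two PowerShell queries — VM state, then the first adapter's first IP — sleeping about a second between empty results until an address appears. A minimal sketch of that retry shape; `get_state` and `get_ip` are hypothetical stand-ins for the PowerShell invocations:

```python
import time

def wait_for_ip(get_state, get_ip, timeout=300.0, interval=1.0, sleep=time.sleep):
    """Poll VM state and first-adapter IP until an address appears,
    mirroring the Get-VM / networkadapters[0].ipaddresses[0] loop."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Running":
            ip = get_ip()
            if ip:
                return ip
        sleep(interval)
    raise TimeoutError("host did not report an IP address in time")

# Simulate the log: the first few IP queries come back empty.
answers = iter(["", "", "", "172.19.153.80"])
ip = wait_for_ip(lambda: "Running", lambda: next(answers), sleep=lambda _: None)
print(ip)  # 172.19.153.80
```

Injecting `sleep` makes the loop testable without real delays; in the log each round trip through powershell.exe takes several seconds, which is why five polls span roughly 25 seconds before the address surfaces.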
	I0923 12:19:11.246392    3340 machine.go:93] provisionDockerMachine start ...
	I0923 12:19:11.246392    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:13.168372    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:13.168937    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:13.168937    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:15.403239    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:15.403309    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:15.407625    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:15.417717    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:15.417717    3340 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:19:15.559478    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:19:15.559478    3340 buildroot.go:166] provisioning hostname "ha-565300-m03"
	I0923 12:19:15.559671    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:17.455996    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:17.456588    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:17.456687    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:19.706647    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:19.707409    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:19.712212    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:19.712212    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:19.712212    3340 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565300-m03 && echo "ha-565300-m03" | sudo tee /etc/hostname
	I0923 12:19:19.878313    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565300-m03
	
	I0923 12:19:19.878313    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:21.743553    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:21.743553    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:21.744571    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:24.002780    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:24.002780    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:24.007353    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:24.007779    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:24.007846    3340 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:19:24.161469    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 
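The /etc/hosts command above is idempotent: it does nothing if the hostname is already mapped, rewrites an existing `127.0.1.1` line if one exists, and appends one otherwise. A sketch of the same grep/sed logic in Python, operating on the file's text directly; `ensure_hosts_entry` is a hypothetical helper, not minikube code:

```python
import re

def ensure_hosts_entry(hosts: str, name: str) -> str:
    """No-op if `name` is already mapped; rewrite an existing
    127.0.1.1 line if present; otherwise append one."""
    if re.search(r"^.*\s" + re.escape(name) + r"$", hosts, re.M):
        return hosts
    if re.search(r"^127\.0\.1\.1\s.*$", hosts, re.M):
        return re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}", hosts, flags=re.M)
    return hosts.rstrip("\n") + f"\n127.0.1.1 {name}\n"

before = "127.0.0.1 localhost\n127.0.1.1 minikube\n"
after = ensure_hosts_entry(before, "ha-565300-m03")
print("127.0.1.1 ha-565300-m03" in after)  # True
```

Running it twice leaves the file unchanged on the second pass, which matches the empty SSH output in the log when the entry is already in place.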
	I0923 12:19:24.161469    3340 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 12:19:24.161469    3340 buildroot.go:174] setting up certificates
	I0923 12:19:24.161555    3340 provision.go:84] configureAuth start
	I0923 12:19:24.161618    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:26.033202    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:26.033202    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:26.034404    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:28.279801    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:28.280523    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:28.280603    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:30.124164    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:30.124221    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:30.124221    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:32.363027    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:32.363027    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:32.363594    3340 provision.go:143] copyHostCerts
	I0923 12:19:32.363594    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 12:19:32.363594    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 12:19:32.363594    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 12:19:32.364364    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 12:19:32.364982    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 12:19:32.364982    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 12:19:32.365507    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 12:19:32.365774    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 12:19:32.366371    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 12:19:32.366972    3340 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 12:19:32.366972    3340 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 12:19:32.366972    3340 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 12:19:32.367568    3340 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-565300-m03 san=[127.0.0.1 172.19.153.80 ha-565300-m03 localhost minikube]
	I0923 12:19:32.461119    3340 provision.go:177] copyRemoteCerts
	I0923 12:19:32.468103    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:19:32.468103    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:34.327901    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:34.327901    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:34.328031    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:36.527864    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:36.528523    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:36.528698    3340 sshutil.go:53] new ssh client: &{IP:172.19.153.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\id_rsa Username:docker}
	I0923 12:19:36.640009    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1715198s)
	I0923 12:19:36.640054    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 12:19:36.640385    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:19:36.688237    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 12:19:36.688237    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 12:19:36.731763    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 12:19:36.732162    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:19:36.774326    3340 provision.go:87] duration metric: took 12.6119202s to configureAuth
	I0923 12:19:36.774326    3340 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:19:36.774326    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:19:36.774910    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:38.595929    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:38.595929    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:38.596125    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:40.807626    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:40.807626    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:40.811685    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:40.812105    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:40.812105    3340 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:19:40.951589    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:19:40.951589    3340 buildroot.go:70] root file system type: tmpfs
	I0923 12:19:40.952014    3340 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:19:40.952171    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:42.827426    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:42.828123    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:42.828256    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:45.073256    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:45.073985    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:45.077872    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:45.078346    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:45.078468    3340 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.146.194"
	Environment="NO_PROXY=172.19.146.194,172.19.154.133"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:19:45.253498    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.146.194
	Environment=NO_PROXY=172.19.146.194,172.19.154.133
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
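The unit file above first emits a bare `ExecStart=` to clear the value inherited from the base configuration before setting its own. A minimal sketch of that override pattern, using a hypothetical scratch file rather than a real drop-in under `/etc/systemd/system/docker.service.d/`:

```shell
# Sketch of the ExecStart override pattern shown in the unit above.
# systemd rejects a second ExecStart= for non-oneshot services, so an
# override must first reset the inherited value with a bare "ExecStart=".
cat > /tmp/docker-override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# On a real system this would be followed by:
#   systemctl daemon-reload && systemctl restart docker
grep -c '^ExecStart=' /tmp/docker-override.conf   # prints 2: one clear, one set
```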
	
	I0923 12:19:45.253498    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:47.117473    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:47.118469    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:47.118545    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:49.384773    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:49.384773    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:49.388813    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:19:49.388866    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:19:49.388866    3340 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:19:51.540595    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
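The `diff -u old new || { mv ...; restart; }` command above installs the new unit only when it differs from the current one (or, as in this log, when the old file does not exist yet, since `diff` also exits non-zero then). A sketch of that replace-only-if-changed pattern on scratch files (paths and contents are illustrative):

```shell
# Replace-only-if-changed: diff exits non-zero when the files differ
# (or the old file is missing), so the || branch installs the new file.
old=/tmp/svc.demo; new=/tmp/svc.demo.new
printf 'v1\n' > "$old"
printf 'v2\n' > "$new"
diff -u "$old" "$new" >/dev/null 2>&1 || { mv "$new" "$old"; echo "updated"; }
cat "$old"   # prints v2
```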
	
	I0923 12:19:51.540661    3340 machine.go:96] duration metric: took 40.2915487s to provisionDockerMachine
	I0923 12:19:51.540661    3340 client.go:171] duration metric: took 1m42.4222277s to LocalClient.Create
	I0923 12:19:51.540661    3340 start.go:167] duration metric: took 1m42.4222277s to libmachine.API.Create "ha-565300"
	I0923 12:19:51.540661    3340 start.go:293] postStartSetup for "ha-565300-m03" (driver="hyperv")
	I0923 12:19:51.540749    3340 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:19:51.549547    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:19:51.549547    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:53.394973    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:53.394973    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:53.395892    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:55.636999    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:55.637068    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:55.637402    3340 sshutil.go:53] new ssh client: &{IP:172.19.153.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\id_rsa Username:docker}
	I0923 12:19:55.749213    3340 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1993422s)
	I0923 12:19:55.757515    3340 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:19:55.764503    3340 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:19:55.764503    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 12:19:55.764923    3340 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 12:19:55.765445    3340 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 12:19:55.765584    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 12:19:55.774426    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:19:55.793854    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 12:19:55.837595    3340 start.go:296] duration metric: took 4.2965558s for postStartSetup
	I0923 12:19:55.841686    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:19:57.709548    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:19:57.709884    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:57.709884    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:19:59.944505    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:19:59.945506    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:19:59.945506    3340 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\config.json ...
	I0923 12:19:59.947235    3340 start.go:128] duration metric: took 1m50.8312785s to createHost
	I0923 12:19:59.947235    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:01.856105    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:01.856105    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:01.856759    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:04.194738    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:04.194823    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:04.198853    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:20:04.199376    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:20:04.199376    3340 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:20:04.334751    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727094004.542351523
	
	I0923 12:20:04.334751    3340 fix.go:216] guest clock: 1727094004.542351523
	I0923 12:20:04.334751    3340 fix.go:229] Guest: 2024-09-23 12:20:04.542351523 +0000 UTC Remote: 2024-09-23 12:19:59.9472359 +0000 UTC m=+507.005833201 (delta=4.595115623s)
	I0923 12:20:04.334751    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:06.276301    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:06.277770    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:06.277770    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:08.558784    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:08.559931    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:08.562949    3340 main.go:141] libmachine: Using SSH client type: native
	I0923 12:20:08.563579    3340 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.80 22 <nil> <nil>}
	I0923 12:20:08.563579    3340 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727094004
	I0923 12:20:08.707857    3340 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 12:20:04 UTC 2024
	
	I0923 12:20:08.707857    3340 fix.go:236] clock set: Mon Sep 23 12:20:04 UTC 2024
	 (err=<nil>)
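The clock-fix step above reads the guest's epoch time over SSH (`date +%s.%N`), computes the host/guest delta, and resets the guest clock with `sudo date -s @<epoch>`. A local sketch of the delta computation (the host timestamp is illustrative, not taken from a live host):

```shell
# Sketch of the guest-clock delta check performed above.
host_epoch=1727094004           # illustrative host timestamp
guest_epoch=$(date +%s)         # the real flow runs "date +%s.%N" on the guest via SSH
delta=$((guest_epoch - host_epoch))
echo "clock delta: ${delta}s"
# when the delta matters, minikube then runs on the guest:
#   sudo date -s @<host_epoch>
```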
	I0923 12:20:08.707857    3340 start.go:83] releasing machines lock for "ha-565300-m03", held for 1m59.591601s
	I0923 12:20:08.708378    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:10.620747    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:10.620747    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:10.620747    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:12.946580    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:12.946580    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:12.949414    3340 out.go:177] * Found network options:
	I0923 12:20:12.952175    3340 out.go:177]   - NO_PROXY=172.19.146.194,172.19.154.133
	W0923 12:20:12.954405    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:20:12.954405    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:20:12.956036    3340 out.go:177]   - NO_PROXY=172.19.146.194,172.19.154.133
	W0923 12:20:12.958721    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:20:12.958721    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:20:12.959772    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 12:20:12.959772    3340 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:20:12.960913    3340 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 12:20:12.960913    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:12.968277    3340 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:20:12.968277    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:20:14.944840    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:14.944939    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:14.944939    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:14.956615    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:14.956615    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:14.956615    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:17.359163    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:17.359163    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:17.360167    3340 sshutil.go:53] new ssh client: &{IP:172.19.153.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\id_rsa Username:docker}
	I0923 12:20:17.386160    3340 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:20:17.386330    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:17.386330    3340 sshutil.go:53] new ssh client: &{IP:172.19.153.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\id_rsa Username:docker}
	I0923 12:20:17.462087    3340 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5008696s)
	W0923 12:20:17.462087    3340 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 12:20:17.478325    3340 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5097443s)
	W0923 12:20:17.478325    3340 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:20:17.487385    3340 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:20:17.514409    3340 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:20:17.514447    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:20:17.514619    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0923 12:20:17.559304    3340 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 12:20:17.559304    3340 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 12:20:17.568053    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:20:17.601738    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:20:17.621081    3340 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:20:17.630019    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:20:17.657784    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:20:17.688995    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:20:17.717493    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:20:17.745220    3340 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:20:17.773855    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:20:17.801602    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:20:17.829091    3340 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
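The sequence of `sed` edits above rewrites `/etc/containerd/config.toml` in place while preserving indentation via a capture group. A sketch of the `SystemdCgroup` edit on a scratch copy (the TOML snippet is a minimal stand-in, not the full containerd config):

```shell
# Sketch of the indentation-preserving sed edit used above, on a scratch file.
cfg=/tmp/containerd-config.toml.demo
printf '  [plugins."io.containerd.grpc.v1.cri"]\n    SystemdCgroup = true\n' > "$cfg"
# \1 re-emits the captured leading spaces, so nesting is preserved
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```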
	I0923 12:20:17.861027    3340 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:20:17.879158    3340 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:20:17.887698    3340 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:20:17.916111    3340 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
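The failed `sysctl` probe followed by `modprobe br_netfilter` above is a check-then-load pattern: the `net.bridge.bridge-nf-call-iptables` sysctl only exists once the module is loaded. A tolerant sketch (loading a module needs root, so failure is allowed here):

```shell
# Check-then-load: probe the bridge-netfilter sysctl and load the
# br_netfilter module only if it is missing.
if sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
    echo "bridge netfilter already available"
else
    # needs root and a matching kernel module; tolerate failure in a sandbox
    modprobe br_netfilter 2>/dev/null || echo "br_netfilter not loadable here"
fi
status=checked
```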
	I0923 12:20:17.940130    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:18.136023    3340 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:20:18.167582    3340 start.go:495] detecting cgroup driver to use...
	I0923 12:20:18.177232    3340 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:20:18.209265    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:20:18.242269    3340 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:20:18.286260    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:20:18.319668    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:20:18.355651    3340 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:20:18.410807    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:20:18.435372    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:20:18.477132    3340 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:20:18.493549    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:20:18.510446    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:20:18.551428    3340 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:20:18.743277    3340 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:20:18.918896    3340 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:20:18.919074    3340 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:20:18.965505    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:19.162392    3340 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:20:21.747436    3340 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5848701s)
	I0923 12:20:21.759292    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:20:21.795623    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:20:21.828964    3340 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:20:22.021226    3340 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:20:22.216364    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:22.404607    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:20:22.443738    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:20:22.476532    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:22.678094    3340 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:20:22.799090    3340 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:20:22.807302    3340 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:20:22.817000    3340 start.go:563] Will wait 60s for crictl version
	I0923 12:20:22.825689    3340 ssh_runner.go:195] Run: which crictl
	I0923 12:20:22.840199    3340 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:20:22.902477    3340 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:20:22.909207    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:20:22.946030    3340 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:20:22.979838    3340 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:20:22.984316    3340 out.go:177]   - env NO_PROXY=172.19.146.194
	I0923 12:20:22.987777    3340 out.go:177]   - env NO_PROXY=172.19.146.194,172.19.154.133
	I0923 12:20:22.990038    3340 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 12:20:22.994890    3340 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 12:20:22.994890    3340 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 12:20:22.994890    3340 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 12:20:22.994890    3340 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 12:20:22.998413    3340 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 12:20:22.999120    3340 ip.go:214] interface addr: 172.19.144.1/20
	I0923 12:20:23.008285    3340 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 12:20:23.018330    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
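The `/etc/hosts` update above filters out any stale `host.minikube.internal` line, appends the fresh mapping to a temp file, and copies it back over the original. A sketch on a scratch file (paths and IPs are illustrative):

```shell
# Sketch of the /etc/hosts rewrite above, on a scratch file.
hosts=/tmp/hosts.demo
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n10.0.0.99\thost.minikube.internal\n' > "$hosts"
# drop any stale mapping, append the fresh one, then swap the file into place
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '172.19.144.1\thost.minikube.internal\n'; } > "$hosts.$$"
mv "$hosts.$$" "$hosts"
cat "$hosts"
```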
	I0923 12:20:23.039863    3340 mustload.go:65] Loading cluster: ha-565300
	I0923 12:20:23.040597    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:20:23.041140    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:20:24.958651    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:24.958651    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:24.958651    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:20:24.959748    3340 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300 for IP: 172.19.153.80
	I0923 12:20:24.959748    3340 certs.go:194] generating shared ca certs ...
	I0923 12:20:24.960270    3340 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:20:24.960611    3340 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 12:20:24.961228    3340 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 12:20:24.961228    3340 certs.go:256] generating profile certs ...
	I0923 12:20:24.961952    3340 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\client.key
	I0923 12:20:24.961952    3340 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.ca49bce9
	I0923 12:20:24.961952    3340 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.ca49bce9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.146.194 172.19.154.133 172.19.153.80 172.19.159.254]
	I0923 12:20:25.260128    3340 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.ca49bce9 ...
	I0923 12:20:25.260128    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.ca49bce9: {Name:mk79814649a4720b0ca874ac6d62fb512a44243f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:20:25.261138    3340 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.ca49bce9 ...
	I0923 12:20:25.261138    3340 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.ca49bce9: {Name:mk47d62a2b375e625148b664ca7055bc4683018c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:20:25.261536    3340 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt.ca49bce9 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt
	I0923 12:20:25.274507    3340 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key.ca49bce9 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key
	I0923 12:20:25.274823    3340 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key
	I0923 12:20:25.274823    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:20:25.274823    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:20:25.274823    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:20:25.274823    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:20:25.275844    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:20:25.275844    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:20:25.275844    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:20:25.275844    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:20:25.277195    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 12:20:25.277462    3340 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 12:20:25.277559    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 12:20:25.277736    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 12:20:25.278023    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 12:20:25.278177    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 12:20:25.278177    3340 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 12:20:25.278177    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:20:25.278718    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 12:20:25.278899    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 12:20:25.279020    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:20:27.200659    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:27.200659    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:27.200994    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:29.492497    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:20:29.492497    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:29.493314    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:20:29.584645    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0923 12:20:29.592601    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0923 12:20:29.619190    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0923 12:20:29.627345    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0923 12:20:29.655125    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0923 12:20:29.661295    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0923 12:20:29.688364    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0923 12:20:29.694472    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0923 12:20:29.720902    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0923 12:20:29.727529    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0923 12:20:29.755424    3340 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0923 12:20:29.765943    3340 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0923 12:20:29.788698    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:20:29.837699    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 12:20:29.884681    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:20:29.926994    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 12:20:29.978505    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0923 12:20:30.028613    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 12:20:30.075784    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:20:30.118699    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-565300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:20:30.162808    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:20:30.206365    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 12:20:30.251993    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 12:20:30.295181    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0923 12:20:30.323164    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0923 12:20:30.351814    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0923 12:20:30.381466    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0923 12:20:30.411508    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0923 12:20:30.443795    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0923 12:20:30.476591    3340 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0923 12:20:30.514443    3340 ssh_runner.go:195] Run: openssl version
	I0923 12:20:30.532198    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:20:30.560819    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:20:30.566835    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:20:30.575107    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:20:30.591982    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:20:30.620457    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 12:20:30.649748    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 12:20:30.656894    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 12:20:30.665713    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 12:20:30.682069    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 12:20:30.711829    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 12:20:30.741972    3340 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 12:20:30.749291    3340 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 12:20:30.761554    3340 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 12:20:30.779799    3340 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
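	[editor's note] The three link commands above follow the OpenSSL subject-hash convention: each CA PEM is symlinked under /etc/ssl/certs as `<subject-hash>.0` so that `openssl verify` and friends can find it by hash. A minimal sketch of how such a command could be assembled (the helper name is hypothetical, not minikube's actual code; the hash comes from `openssl x509 -hash -noout` as run in the log):

	```python
	def build_cert_link_cmd(pem_path, subject_hash):
	    # Hypothetical sketch: build the idempotent link command seen in the
	    # log -- only (re)create the symlink if it is not already present.
	    link = "/etc/ssl/certs/{}.0".format(subject_hash)
	    return 'sudo /bin/bash -c "test -L {0} || ln -fs {1} {0}"'.format(link, pem_path)
	```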
	I0923 12:20:30.810134    3340 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:20:30.816949    3340 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 12:20:30.817169    3340 kubeadm.go:934] updating node {m03 172.19.153.80 8443 v1.31.1 docker true true} ...
	I0923 12:20:30.817169    3340 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.153.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:20:30.817169    3340 kube-vip.go:115] generating kube-vip config ...
	I0923 12:20:30.825298    3340 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0923 12:20:30.857390    3340 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0923 12:20:30.857390    3340 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.159.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
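	[editor's note] The static pod manifest above is rendered from kube-vip settings, with every value stringified into a container `env:` entry. A hypothetical sketch of that dict-to-env conversion (not minikube's kube-vip.go, which templates the whole pod):

	```python
	def to_env_entries(settings):
	    # Render kube-vip settings as Kubernetes container env entries.
	    # Manifest env values are always strings, so coerce here.
	    return [{"name": k, "value": str(v)} for k, v in settings.items()]
	```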
	I0923 12:20:30.867663    3340 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:20:30.889150    3340 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 12:20:30.898124    3340 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 12:20:30.914886    3340 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 12:20:30.914886    3340 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 12:20:30.914886    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:20:30.914886    3340 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 12:20:30.917181    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:20:30.928069    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:20:30.928297    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 12:20:30.931276    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 12:20:30.954476    3340 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:20:30.954476    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 12:20:30.954758    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 12:20:30.954758    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 12:20:30.955106    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 12:20:30.963680    3340 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 12:20:31.014734    3340 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 12:20:31.016110    3340 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
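	[editor's note] Each binary transfer above is gated by an existence check: the remote `stat -c "%s %y"` exits nonzero when the file is missing, which triggers the scp from the local cache. A sketch of that decision under the simplifying assumption that only exit status and size are compared:

	```python
	def needs_copy(stat_exit_code, local_size, remote_size=None):
	    # Copy when the remote stat failed (file absent, as in the log's
	    # "Process exited with status 1") or when sizes disagree.
	    if stat_exit_code != 0:
	        return True
	    return remote_size != local_size
	```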
	I0923 12:20:31.898713    3340 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0923 12:20:31.915394    3340 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0923 12:20:31.947672    3340 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:20:31.979750    3340 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0923 12:20:32.023381    3340 ssh_runner.go:195] Run: grep 172.19.159.254	control-plane.minikube.internal$ /etc/hosts
	I0923 12:20:32.029193    3340 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.159.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
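	[editor's note] The one-liner above drops any stale `control-plane.minikube.internal` line from /etc/hosts and appends the current HA VIP. The same transform, as a pure-Python sketch of the grep -v / echo pipeline:

	```python
	def update_hosts(hosts_text, vip, host="control-plane.minikube.internal"):
	    # Drop any existing line for `host`, then append "vip<TAB>host",
	    # mirroring the bash pipeline run over /etc/hosts in the log.
	    kept = [l for l in hosts_text.splitlines() if not l.endswith("\t" + host)]
	    kept.append("{}\t{}".format(vip, host))
	    return "\n".join(kept) + "\n"
	```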
	I0923 12:20:32.064143    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:20:32.261236    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:20:32.289266    3340 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:20:32.289266    3340 start.go:317] joinCluster: &{Name:ha-565300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-565300 Namespace:default APIServerHAVIP:172.19.159.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.146.194 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.154.133 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.19.153.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor
-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:20:32.289266    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 12:20:32.289266    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:20:34.164421    3340 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:20:34.164421    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:34.165346    3340 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:20:36.443209    3340 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:20:36.444169    3340 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:20:36.444169    3340 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:20:36.633203    3340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3428791s)
	I0923 12:20:36.633302    3340 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.19.153.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:20:36.633302    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdy155.vq6ux4r409f7wy9t --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-565300-m03 --control-plane --apiserver-advertise-address=172.19.153.80 --apiserver-bind-port=8443"
	I0923 12:21:19.613533    3340 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdy155.vq6ux4r409f7wy9t --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-565300-m03 --control-plane --apiserver-advertise-address=172.19.153.80 --apiserver-bind-port=8443": (42.9766738s)
	I0923 12:21:19.613533    3340 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 12:21:20.430882    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565300-m03 minikube.k8s.io/updated_at=2024_09_23T12_21_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=ha-565300 minikube.k8s.io/primary=false
	I0923 12:21:20.591665    3340 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565300-m03 node-role.kubernetes.io/control-plane:NoSchedule-
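	[editor's note] The 43s join at 12:20:36 runs a kubeadm command assembled from the freshly minted token, the CA cert hash, and the m03 node settings. A hypothetical sketch of that assembly (flag set copied from the log; the builder itself is illustrative, not minikube code):

	```python
	def build_join_cmd(endpoint, token, ca_hash, node_name, advertise_ip, port=8443):
	    # Assemble a control-plane kubeadm join command with the flags
	    # observed in the log (cri-dockerd socket, preflight errors ignored).
	    return (
	        "kubeadm join {} --token {} "
	        "--discovery-token-ca-cert-hash sha256:{} "
	        "--ignore-preflight-errors=all "
	        "--cri-socket unix:///var/run/cri-dockerd.sock "
	        "--node-name={} --control-plane "
	        "--apiserver-advertise-address={} "
	        "--apiserver-bind-port={}"
	    ).format(endpoint, token, ca_hash, node_name, advertise_ip, port)
	```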
	I0923 12:21:20.733285    3340 start.go:319] duration metric: took 48.4407491s to joinCluster
	I0923 12:21:20.734266    3340 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.19.153.80 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:21:20.734266    3340 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:21:20.737832    3340 out.go:177] * Verifying Kubernetes components...
	I0923 12:21:20.747598    3340 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:21:21.094640    3340 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:21:21.124386    3340 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 12:21:21.125314    3340 kapi.go:59] client config for ha-565300: &rest.Config{Host:"https://172.19.159.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-565300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0923 12:21:21.125447    3340 kubeadm.go:483] Overriding stale ClientConfig host https://172.19.159.254:8443 with https://172.19.146.194:8443
	I0923 12:21:21.126524    3340 node_ready.go:35] waiting up to 6m0s for node "ha-565300-m03" to be "Ready" ...
	I0923 12:21:21.126789    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:21.126789    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:21.126886    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:21.126886    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:21.139434    3340 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0923 12:21:21.627311    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:21.627434    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:21.627434    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:21.627434    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:21.635818    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:21:22.126890    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:22.126890    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:22.126890    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:22.126890    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:22.134370    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:22.627425    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:22.627425    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:22.627425    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:22.627425    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:22.630482    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:23.127788    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:23.127853    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:23.127853    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:23.127853    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:23.131871    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:23.132537    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:23.627172    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:23.627172    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:23.627172    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:23.627172    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:23.637231    3340 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 12:21:24.127625    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:24.127648    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:24.127648    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:24.127648    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:24.469571    3340 round_trippers.go:574] Response Status: 200 OK in 341 milliseconds
	I0923 12:21:24.627882    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:24.627882    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:24.627882    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:24.627882    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:24.632614    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:25.127101    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:25.127101    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:25.127101    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:25.127101    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:25.131995    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:25.132862    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:25.628001    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:25.628001    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:25.628001    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:25.628001    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:25.656672    3340 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0923 12:21:26.128280    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:26.128358    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:26.128358    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:26.128406    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:26.136136    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:26.627578    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:26.627578    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:26.627578    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:26.627578    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:26.632240    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:27.127798    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:27.127798    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:27.127798    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:27.127798    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:27.134986    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:27.136056    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:27.627285    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:27.627285    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:27.627285    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:27.627285    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:27.631749    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:28.127870    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:28.127870    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:28.127870    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:28.127870    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:28.133034    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:28.628335    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:28.628400    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:28.628400    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:28.628400    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:28.632366    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:29.127512    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:29.127512    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:29.127512    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:29.127512    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:29.624094    3340 round_trippers.go:574] Response Status: 200 OK in 496 milliseconds
	I0923 12:21:29.625013    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
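	[editor's note] The node_ready loop above polls GET /api/v1/nodes/ha-565300-m03 roughly every 500ms (for up to 6m0s) until the node's Ready condition flips to True. A sketch of the per-poll condition check applied to a decoded node object:

	```python
	def is_node_ready(node):
	    # True when the node's Ready condition reports status "True",
	    # which is what each poll in the log is waiting for.
	    for cond in node.get("status", {}).get("conditions", []):
	        if cond.get("type") == "Ready":
	            return cond.get("status") == "True"
	    return False
	```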
	I0923 12:21:29.627426    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:29.627426    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:29.627426    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:29.627426    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:29.631853    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:30.128409    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:30.128409    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:30.128409    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:30.128409    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:30.780632    3340 round_trippers.go:574] Response Status: 200 OK in 652 milliseconds
	I0923 12:21:30.781645    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:30.781645    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:30.781645    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:30.781645    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:30.786281    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:31.128084    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:31.128084    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:31.128084    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:31.128084    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:31.133795    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:31.628046    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:31.628046    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:31.628046    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:31.628046    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:31.632655    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:31.633276    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:32.127988    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:32.127988    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:32.127988    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:32.127988    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:32.273508    3340 round_trippers.go:574] Response Status: 200 OK in 145 milliseconds
	I0923 12:21:32.627592    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:32.627592    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:32.627592    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:32.627592    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:32.631523    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:33.128697    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:33.128697    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:33.128697    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:33.128697    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:33.270635    3340 round_trippers.go:574] Response Status: 200 OK in 141 milliseconds
	I0923 12:21:33.628459    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:33.628459    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:33.628459    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:33.628459    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:33.633200    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:33.633760    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:34.128560    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:34.128560    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:34.128631    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:34.128631    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:34.135154    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:21:34.629108    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:34.629108    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:34.629108    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:34.629108    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:34.663927    3340 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0923 12:21:35.128561    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:35.128561    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:35.128561    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:35.128561    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:35.136589    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:21:35.628443    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:35.628844    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:35.628844    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:35.628844    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:35.633197    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:35.634146    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:36.128413    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:36.128413    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:36.128413    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:36.128413    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:36.229842    3340 round_trippers.go:574] Response Status: 200 OK in 101 milliseconds
	I0923 12:21:36.628226    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:36.628226    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:36.628226    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:36.628226    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:37.676753    3340 round_trippers.go:574] Response Status: 200 OK in 1048 milliseconds
	I0923 12:21:37.677028    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:37.677028    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:37.677028    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:37.677028    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:37.677028    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:37.686987    3340 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 12:21:38.128494    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:38.128494    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:38.128494    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:38.128494    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:38.133491    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:38.628965    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:38.628965    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:38.628965    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:38.628965    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:39.163544    3340 round_trippers.go:574] Response Status: 200 OK in 534 milliseconds
	I0923 12:21:39.164124    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:39.164124    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:39.164124    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:39.164124    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:39.186366    3340 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0923 12:21:39.628621    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:39.629133    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:39.629133    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:39.629133    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:40.154047    3340 round_trippers.go:574] Response Status: 200 OK in 524 milliseconds
	I0923 12:21:40.154987    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:40.155160    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:40.155160    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:40.155160    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:40.155160    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:40.160544    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:40.629380    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:40.629380    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:40.629447    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:40.629447    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:40.634154    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:41.128356    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:41.128918    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:41.128918    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:41.128918    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:41.133449    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:41.628866    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:41.628866    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:41.628866    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:41.628866    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:41.632064    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:42.128548    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:42.128548    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:42.128548    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:42.128548    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:42.177356    3340 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0923 12:21:42.178218    3340 node_ready.go:53] node "ha-565300-m03" has status "Ready":"False"
	I0923 12:21:42.628237    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:42.628237    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:42.628237    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:42.628237    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:42.632341    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.129542    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:43.129542    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.129542    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.129542    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.134363    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.135703    3340 node_ready.go:49] node "ha-565300-m03" has status "Ready":"True"
	I0923 12:21:43.135703    3340 node_ready.go:38] duration metric: took 22.0075913s for node "ha-565300-m03" to be "Ready" ...
	I0923 12:21:43.135759    3340 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:21:43.135842    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:21:43.135905    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.135905    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.135905    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.146535    3340 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 12:21:43.155412    3340 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.155412    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-7jzhc
	I0923 12:21:43.155412    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.155412    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.155412    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.160015    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.161602    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:43.161602    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.161602    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.161602    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.169631    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:43.170593    3340 pod_ready.go:93] pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:43.170593    3340 pod_ready.go:82] duration metric: took 15.1801ms for pod "coredns-7c65d6cfc9-7jzhc" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.170593    3340 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.170593    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-kf224
	I0923 12:21:43.170593    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.170593    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.170593    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.174351    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:43.176050    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:43.176109    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.176109    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.176165    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.180395    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.180395    3340 pod_ready.go:93] pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:43.180395    3340 pod_ready.go:82] duration metric: took 9.8012ms for pod "coredns-7c65d6cfc9-kf224" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.180395    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.181393    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300
	I0923 12:21:43.181393    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.181393    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.181393    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.184019    3340 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:21:43.185030    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:43.185030    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.185030    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.185030    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.190484    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:43.190744    3340 pod_ready.go:93] pod "etcd-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:43.190744    3340 pod_ready.go:82] duration metric: took 10.3476ms for pod "etcd-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.190744    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.190744    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300-m02
	I0923 12:21:43.190744    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.190744    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.190744    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.195973    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.196025    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:43.196025    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.196025    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.196571    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.203825    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:43.204537    3340 pod_ready.go:93] pod "etcd-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:43.204573    3340 pod_ready.go:82] duration metric: took 13.8288ms for pod "etcd-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.204573    3340 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:43.329771    3340 request.go:632] Waited for 125.113ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300-m03
	I0923 12:21:43.329771    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565300-m03
	I0923 12:21:43.329771    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.329771    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:43.329771    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.334280    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:43.529806    3340 request.go:632] Waited for 194.3407ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:43.529806    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:43.529806    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:43.529806    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:43.529806    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.140592    3340 round_trippers.go:574] Response Status: 200 OK in 610 milliseconds
	I0923 12:21:44.140724    3340 pod_ready.go:93] pod "etcd-ha-565300-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:44.140724    3340 pod_ready.go:82] duration metric: took 936.0875ms for pod "etcd-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.141260    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.141448    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300
	I0923 12:21:44.141473    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.141501    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.141501    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.146748    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:44.148764    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:44.148840    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.148840    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.148840    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.152173    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:44.152808    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:44.152843    3340 pod_ready.go:82] duration metric: took 11.5823ms for pod "kube-apiserver-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.152884    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.152963    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m02
	I0923 12:21:44.153005    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.153041    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.153041    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.156958    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:44.330808    3340 request.go:632] Waited for 173.8386ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:44.330808    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:44.330808    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.330808    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.330808    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.335395    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:44.336569    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:44.336625    3340 pod_ready.go:82] duration metric: took 183.7286ms for pod "kube-apiserver-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.336625    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.529942    3340 request.go:632] Waited for 193.1916ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m03
	I0923 12:21:44.529942    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565300-m03
	I0923 12:21:44.529942    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.529942    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.529942    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.534860    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:44.730499    3340 request.go:632] Waited for 194.7489ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:44.730499    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:44.730499    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.730499    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.730499    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.735616    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:44.736389    3340 pod_ready.go:93] pod "kube-apiserver-ha-565300-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:44.736389    3340 pod_ready.go:82] duration metric: took 399.6809ms for pod "kube-apiserver-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.736389    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:44.930285    3340 request.go:632] Waited for 193.8057ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300
	I0923 12:21:44.930285    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300
	I0923 12:21:44.930285    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:44.930285    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:44.930285    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:44.935962    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:45.130510    3340 request.go:632] Waited for 193.6585ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:45.130885    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:45.130995    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.130995    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.130995    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.135291    3340 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:21:45.136385    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:45.136450    3340 pod_ready.go:82] duration metric: took 400.034ms for pod "kube-controller-manager-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.136450    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.330843    3340 request.go:632] Waited for 194.3155ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m02
	I0923 12:21:45.330843    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m02
	I0923 12:21:45.330843    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.330843    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.330843    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.338488    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:45.530131    3340 request.go:632] Waited for 190.3373ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:45.530131    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:45.530131    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.530131    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.530131    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.542151    3340 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0923 12:21:45.543026    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:45.543026    3340 pod_ready.go:82] duration metric: took 406.5494ms for pod "kube-controller-manager-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.543026    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.730446    3340 request.go:632] Waited for 187.4073ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m03
	I0923 12:21:45.730446    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565300-m03
	I0923 12:21:45.730446    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.730446    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.730446    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.736091    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:45.930080    3340 request.go:632] Waited for 192.8022ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:45.930080    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:45.930080    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:45.930080    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:45.930080    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:45.936069    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:45.936798    3340 pod_ready.go:93] pod "kube-controller-manager-ha-565300-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:45.936865    3340 pod_ready.go:82] duration metric: took 393.8121ms for pod "kube-controller-manager-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:45.936923    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9fdqn" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.130038    3340 request.go:632] Waited for 193.0438ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fdqn
	I0923 12:21:46.130038    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fdqn
	I0923 12:21:46.130038    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.130038    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.130038    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.136063    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:46.330656    3340 request.go:632] Waited for 194.1853ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:46.330997    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:46.330997    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.330997    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.330997    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.334682    3340 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:21:46.335481    3340 pod_ready.go:93] pod "kube-proxy-9fdqn" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:46.335481    3340 pod_ready.go:82] duration metric: took 398.5311ms for pod "kube-proxy-9fdqn" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.335481    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jzwmh" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.529870    3340 request.go:632] Waited for 194.2332ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzwmh
	I0923 12:21:46.530259    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jzwmh
	I0923 12:21:46.530259    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.530259    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.530259    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.537100    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:21:46.729871    3340 request.go:632] Waited for 191.1244ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:46.730225    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:46.730293    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.730293    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.730293    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.738454    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:46.738578    3340 pod_ready.go:93] pod "kube-proxy-jzwmh" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:46.738578    3340 pod_ready.go:82] duration metric: took 403.0696ms for pod "kube-proxy-jzwmh" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.738578    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s4s8g" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:46.930002    3340 request.go:632] Waited for 191.4113ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s4s8g
	I0923 12:21:46.930002    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s4s8g
	I0923 12:21:46.930510    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:46.930510    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:46.930510    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:46.935554    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:47.130057    3340 request.go:632] Waited for 193.3999ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:47.130057    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:47.130057    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.130057    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.130057    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.138658    3340 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 12:21:47.139609    3340 pod_ready.go:93] pod "kube-proxy-s4s8g" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:47.139609    3340 pod_ready.go:82] duration metric: took 401.0039ms for pod "kube-proxy-s4s8g" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.139609    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.330035    3340 request.go:632] Waited for 190.4134ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300
	I0923 12:21:47.330035    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300
	I0923 12:21:47.330035    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.330035    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.330035    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.337612    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:47.531179    3340 request.go:632] Waited for 192.7814ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:47.531179    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300
	I0923 12:21:47.531179    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.531179    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.531179    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.538448    3340 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 12:21:47.539473    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:47.539530    3340 pod_ready.go:82] duration metric: took 399.8377ms for pod "kube-scheduler-ha-565300" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.539530    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.730673    3340 request.go:632] Waited for 190.9754ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m02
	I0923 12:21:47.730673    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m02
	I0923 12:21:47.730673    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.730673    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.730673    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.736033    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:47.930501    3340 request.go:632] Waited for 193.3078ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:47.930867    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m02
	I0923 12:21:47.930867    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:47.930867    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:47.930867    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:47.936657    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:47.937635    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:47.937699    3340 pod_ready.go:82] duration metric: took 398.1424ms for pod "kube-scheduler-ha-565300-m02" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:47.937699    3340 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:48.130812    3340 request.go:632] Waited for 192.9783ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m03
	I0923 12:21:48.130812    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565300-m03
	I0923 12:21:48.130812    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.130812    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.130812    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.293270    3340 round_trippers.go:574] Response Status: 200 OK in 162 milliseconds
	I0923 12:21:48.331110    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes/ha-565300-m03
	I0923 12:21:48.331307    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.331307    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.331307    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.391571    3340 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0923 12:21:48.392701    3340 pod_ready.go:93] pod "kube-scheduler-ha-565300-m03" in "kube-system" namespace has status "Ready":"True"
	I0923 12:21:48.392701    3340 pod_ready.go:82] duration metric: took 454.9152ms for pod "kube-scheduler-ha-565300-m03" in "kube-system" namespace to be "Ready" ...
	I0923 12:21:48.392769    3340 pod_ready.go:39] duration metric: took 5.2566558s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:21:48.392837    3340 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:21:48.402638    3340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:21:48.430037    3340 api_server.go:72] duration metric: took 27.6939011s to wait for apiserver process to appear ...
	I0923 12:21:48.430103    3340 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:21:48.430170    3340 api_server.go:253] Checking apiserver healthz at https://172.19.146.194:8443/healthz ...
	I0923 12:21:48.438019    3340 api_server.go:279] https://172.19.146.194:8443/healthz returned 200:
	ok
	I0923 12:21:48.438160    3340 round_trippers.go:463] GET https://172.19.146.194:8443/version
	I0923 12:21:48.438176    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.438176    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.438176    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.439418    3340 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:21:48.439581    3340 api_server.go:141] control plane version: v1.31.1
	I0923 12:21:48.439620    3340 api_server.go:131] duration metric: took 9.4105ms to wait for apiserver health ...
	I0923 12:21:48.439620    3340 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:21:48.530592    3340 request.go:632] Waited for 90.7524ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:21:48.530592    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:21:48.530592    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.530592    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.530592    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.873468    3340 round_trippers.go:574] Response Status: 200 OK in 342 milliseconds
	I0923 12:21:48.883799    3340 system_pods.go:59] 24 kube-system pods found
	I0923 12:21:48.883799    3340 system_pods.go:61] "coredns-7c65d6cfc9-7jzhc" [3410fd4d-a455-48c7-a6c3-7b3af6aa50a6] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "coredns-7c65d6cfc9-kf224" [08055950-19ea-4d96-b610-ca1d025c25c2] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "etcd-ha-565300" [fa5fe799-27bb-442e-9093-70d1f91fd7f3] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "etcd-ha-565300-m02" [18c247e2-8721-4662-b8db-b9174e535412] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "etcd-ha-565300-m03" [02e5f7e1-6097-482b-9c7f-d6a806858da2] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kindnet-gvvph" [c728d1b2-d98f-4947-a971-dca1b05ba54a] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kindnet-j45vw" [2bc2bb0f-f609-4780-a13e-3c0d3b8f20d7] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kindnet-jcj4l" [e9f183eb-5b54-4852-a996-4b4ce9a938d9] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-apiserver-ha-565300" [89e33fd1-9346-4a7d-a6c2-37a1cc636b58] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-apiserver-ha-565300-m02" [8c350e1d-ee2d-4a80-8ed8-8140a2b2e660] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-apiserver-ha-565300-m03" [639ce30d-84fa-4bb1-a0c9-52a8dc896100] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-controller-manager-ha-565300" [d4599166-8583-47c0-a3c8-dc8c28fac9a2] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-controller-manager-ha-565300-m02" [6f035dd0-acd5-4162-b0d1-f37dff03d62f] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-controller-manager-ha-565300-m03" [345dc9c1-d760-4ea8-90f1-62934babffe9] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-proxy-9fdqn" [de0503b5-3ec6-4d2f-bb9a-b8f670c1abcd] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-proxy-jzwmh" [335d0452-7c30-4fe2-b0bb-d79af97b1a2d] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-proxy-s4s8g" [85c46e0e-ab32-420e-a9b7-fee9d360c8ec] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-scheduler-ha-565300" [a9ea8c2a-bfe0-4c4d-9da8-fd3b48b518b1] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-scheduler-ha-565300-m02" [de3cea24-2ae5-4a8e-8dff-3baa6cbd136f] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-scheduler-ha-565300-m03" [305c9f7d-70a4-4a9f-b50d-5cdedfcd204b] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-vip-ha-565300" [800f2b80-94bc-4068-86eb-95bc7d58cdd7] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-vip-ha-565300-m02" [5a2386d6-9706-4c61-9e8a-b1a39838f0f9] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "kube-vip-ha-565300-m03" [757fd58d-0e45-4408-9832-027591ab9d09] Running
	I0923 12:21:48.883799    3340 system_pods.go:61] "storage-provisioner" [e8126304-9d6c-4f7f-ac79-f0bbf61690b3] Running
	I0923 12:21:48.883799    3340 system_pods.go:74] duration metric: took 444.1482ms to wait for pod list to return data ...
	I0923 12:21:48.883799    3340 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:21:48.883799    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:21:48.883799    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.883799    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.883799    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.889657    3340 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 12:21:48.889850    3340 default_sa.go:45] found service account: "default"
	I0923 12:21:48.889850    3340 default_sa.go:55] duration metric: took 6.0508ms for default service account to be created ...
	I0923 12:21:48.889900    3340 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:21:48.930213    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/namespaces/kube-system/pods
	I0923 12:21:48.930213    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:48.930213    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:48.930213    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:48.969663    3340 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0923 12:21:48.979299    3340 system_pods.go:86] 24 kube-system pods found
	I0923 12:21:48.979395    3340 system_pods.go:89] "coredns-7c65d6cfc9-7jzhc" [3410fd4d-a455-48c7-a6c3-7b3af6aa50a6] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "coredns-7c65d6cfc9-kf224" [08055950-19ea-4d96-b610-ca1d025c25c2] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "etcd-ha-565300" [fa5fe799-27bb-442e-9093-70d1f91fd7f3] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "etcd-ha-565300-m02" [18c247e2-8721-4662-b8db-b9174e535412] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "etcd-ha-565300-m03" [02e5f7e1-6097-482b-9c7f-d6a806858da2] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kindnet-gvvph" [c728d1b2-d98f-4947-a971-dca1b05ba54a] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kindnet-j45vw" [2bc2bb0f-f609-4780-a13e-3c0d3b8f20d7] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kindnet-jcj4l" [e9f183eb-5b54-4852-a996-4b4ce9a938d9] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-apiserver-ha-565300" [89e33fd1-9346-4a7d-a6c2-37a1cc636b58] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-apiserver-ha-565300-m02" [8c350e1d-ee2d-4a80-8ed8-8140a2b2e660] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-apiserver-ha-565300-m03" [639ce30d-84fa-4bb1-a0c9-52a8dc896100] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-controller-manager-ha-565300" [d4599166-8583-47c0-a3c8-dc8c28fac9a2] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-controller-manager-ha-565300-m02" [6f035dd0-acd5-4162-b0d1-f37dff03d62f] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-controller-manager-ha-565300-m03" [345dc9c1-d760-4ea8-90f1-62934babffe9] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-proxy-9fdqn" [de0503b5-3ec6-4d2f-bb9a-b8f670c1abcd] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-proxy-jzwmh" [335d0452-7c30-4fe2-b0bb-d79af97b1a2d] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-proxy-s4s8g" [85c46e0e-ab32-420e-a9b7-fee9d360c8ec] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-scheduler-ha-565300" [a9ea8c2a-bfe0-4c4d-9da8-fd3b48b518b1] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-scheduler-ha-565300-m02" [de3cea24-2ae5-4a8e-8dff-3baa6cbd136f] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-scheduler-ha-565300-m03" [305c9f7d-70a4-4a9f-b50d-5cdedfcd204b] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-vip-ha-565300" [800f2b80-94bc-4068-86eb-95bc7d58cdd7] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-vip-ha-565300-m02" [5a2386d6-9706-4c61-9e8a-b1a39838f0f9] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "kube-vip-ha-565300-m03" [757fd58d-0e45-4408-9832-027591ab9d09] Running
	I0923 12:21:48.979395    3340 system_pods.go:89] "storage-provisioner" [e8126304-9d6c-4f7f-ac79-f0bbf61690b3] Running
	I0923 12:21:48.979395    3340 system_pods.go:126] duration metric: took 89.489ms to wait for k8s-apps to be running ...
	I0923 12:21:48.979395    3340 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:21:48.987931    3340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:21:49.014481    3340 system_svc.go:56] duration metric: took 35.0832ms WaitForService to wait for kubelet
	I0923 12:21:49.014554    3340 kubeadm.go:582] duration metric: took 28.278379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:21:49.014625    3340 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:21:49.130252    3340 request.go:632] Waited for 115.5168ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.146.194:8443/api/v1/nodes
	I0923 12:21:49.130252    3340 round_trippers.go:463] GET https://172.19.146.194:8443/api/v1/nodes
	I0923 12:21:49.130252    3340 round_trippers.go:469] Request Headers:
	I0923 12:21:49.130252    3340 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 12:21:49.130252    3340 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:21:49.136945    3340 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:21:49.138100    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:21:49.138100    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:21:49.138213    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:21:49.138213    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:21:49.138213    3340 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:21:49.138213    3340 node_conditions.go:123] node cpu capacity is 2
	I0923 12:21:49.138213    3340 node_conditions.go:105] duration metric: took 123.5791ms to run NodePressure ...
	I0923 12:21:49.138213    3340 start.go:241] waiting for startup goroutines ...
	I0923 12:21:49.138213    3340 start.go:255] writing updated cluster config ...
	I0923 12:21:49.148042    3340 ssh_runner.go:195] Run: rm -f paused
	I0923 12:21:49.278631    3340 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 12:21:49.290159    3340 out.go:177] * Done! kubectl is now configured to use "ha-565300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.695698636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.723372777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.723458482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.723488084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:45 ha-565300 dockerd[1429]: time="2024-09-23T12:14:45.723649994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:45 ha-565300 cri-dockerd[1321]: time="2024-09-23T12:14:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a8c05c72015312af8a6c4b368cb2fd302186faa02e1caa119729602e1027f3ad/resolv.conf as [nameserver 172.19.144.1]"
	Sep 23 12:14:45 ha-565300 cri-dockerd[1321]: time="2024-09-23T12:14:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ec96b961c47200351c60f916faeae6e6d01781fb1659afec1103dd2255fa789d/resolv.conf as [nameserver 172.19.144.1]"
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.103590014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.103764825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.103786727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.105837158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.153717627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.153851235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.153874737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:14:46 ha-565300 dockerd[1429]: time="2024-09-23T12:14:46.154009446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:22:32 ha-565300 dockerd[1429]: time="2024-09-23T12:22:32.343090308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:22:32 ha-565300 dockerd[1429]: time="2024-09-23T12:22:32.343430028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:22:32 ha-565300 dockerd[1429]: time="2024-09-23T12:22:32.343474331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:22:32 ha-565300 dockerd[1429]: time="2024-09-23T12:22:32.343634640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:22:32 ha-565300 cri-dockerd[1321]: time="2024-09-23T12:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/94d7ba7dd4e11e602b396a5754f5a9c0a4d8b23595aafe2181de568836040596/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 12:22:35 ha-565300 cri-dockerd[1321]: time="2024-09-23T12:22:35Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Sep 23 12:22:37 ha-565300 dockerd[1429]: time="2024-09-23T12:22:37.015010228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:22:37 ha-565300 dockerd[1429]: time="2024-09-23T12:22:37.015142337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:22:37 ha-565300 dockerd[1429]: time="2024-09-23T12:22:37.015161338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:22:37 ha-565300 dockerd[1429]: time="2024-09-23T12:22:37.015274745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff23db9d03c23       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago      Running             busybox                   0                   94d7ba7dd4e11       busybox-7dff88458-rjg7r
	21587833455a5       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   ec96b961c4720       storage-provisioner
	3913e82ea5d64       c69fa2e9cbf5f                                                                                         27 minutes ago      Running             coredns                   0                   a8c05c7201531       coredns-7c65d6cfc9-7jzhc
	9e936da45f9fc       c69fa2e9cbf5f                                                                                         27 minutes ago      Running             coredns                   0                   b694930c61f03       coredns-7c65d6cfc9-kf224
	ec009d58ec024       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              28 minutes ago      Running             kindnet-cni               0                   581d1866dc0e1       kindnet-gvvph
	5a8e37d9bdb76       60c005f310ff3                                                                                         28 minutes ago      Running             kube-proxy                0                   ada4b7396f1f9       kube-proxy-s4s8g
	e04d5fa3131b0       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     28 minutes ago      Running             kube-vip                  0                   0e4f892d50a24       kube-vip-ha-565300
	6557cb9820342       2e96e5913fc06                                                                                         28 minutes ago      Running             etcd                      0                   f17f48f36f54b       etcd-ha-565300
	bb14fd3d1b742       175ffd71cce3d                                                                                         28 minutes ago      Running             kube-controller-manager   0                   d5c4129b72c11       kube-controller-manager-ha-565300
	3c9ae68aa117b       9aa1fad941575                                                                                         28 minutes ago      Running             kube-scheduler            0                   9a0b7e2df2fe3       kube-scheduler-ha-565300
	d6fe896ee937c       6bab7719df100                                                                                         28 minutes ago      Running             kube-apiserver            0                   4ac1baf148601       kube-apiserver-ha-565300
	
	
	==> coredns [3913e82ea5d6] <==
	[INFO] 10.244.2.3:38504 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.14862568s
	[INFO] 10.244.0.4:56004 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000174112s
	[INFO] 10.244.0.4:52799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.01414357s
	[INFO] 10.244.1.2:36426 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000096507s
	[INFO] 10.244.2.3:51155 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000324522s
	[INFO] 10.244.2.3:36383 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.048859679s
	[INFO] 10.244.2.3:53302 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170012s
	[INFO] 10.244.2.3:43083 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173512s
	[INFO] 10.244.0.4:56969 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000166811s
	[INFO] 10.244.0.4:36041 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.110350104s
	[INFO] 10.244.0.4:40805 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00028752s
	[INFO] 10.244.0.4:36040 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014601s
	[INFO] 10.244.1.2:43033 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000207914s
	[INFO] 10.244.1.2:35421 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000201613s
	[INFO] 10.244.1.2:53463 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162011s
	[INFO] 10.244.1.2:41559 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164811s
	[INFO] 10.244.1.2:59905 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173312s
	[INFO] 10.244.2.3:46533 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105507s
	[INFO] 10.244.0.4:33331 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135609s
	[INFO] 10.244.0.4:51753 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014631s
	[INFO] 10.244.1.2:38901 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117208s
	[INFO] 10.244.2.3:59701 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199213s
	[INFO] 10.244.0.4:42855 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000320521s
	[INFO] 10.244.0.4:46554 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00014041s
	[INFO] 10.244.1.2:36654 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000189313s
	
	
	==> coredns [9e936da45f9f] <==
	[INFO] 10.244.2.3:50668 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014649483s
	[INFO] 10.244.2.3:58314 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000228015s
	[INFO] 10.244.0.4:37445 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000267718s
	[INFO] 10.244.0.4:55085 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000203414s
	[INFO] 10.244.0.4:42792 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117908s
	[INFO] 10.244.0.4:38076 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000277418s
	[INFO] 10.244.1.2:50453 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000245016s
	[INFO] 10.244.1.2:48448 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000080205s
	[INFO] 10.244.1.2:48024 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00014181s
	[INFO] 10.244.2.3:50673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134209s
	[INFO] 10.244.2.3:33924 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137709s
	[INFO] 10.244.2.3:56280 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097907s
	[INFO] 10.244.0.4:41015 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000131709s
	[INFO] 10.244.0.4:57270 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000083506s
	[INFO] 10.244.1.2:56697 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091006s
	[INFO] 10.244.1.2:59874 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179412s
	[INFO] 10.244.1.2:51098 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198014s
	[INFO] 10.244.2.3:46102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152611s
	[INFO] 10.244.2.3:42225 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118308s
	[INFO] 10.244.2.3:53183 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123209s
	[INFO] 10.244.0.4:51947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247417s
	[INFO] 10.244.0.4:46586 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000189712s
	[INFO] 10.244.1.2:50141 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181312s
	[INFO] 10.244.1.2:52940 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000134009s
	[INFO] 10.244.1.2:41234 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108707s
	
	
	==> describe nodes <==
	Name:               ha-565300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-565300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_14_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:14:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:42:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:38:15 +0000   Mon, 23 Sep 2024 12:14:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:38:15 +0000   Mon, 23 Sep 2024 12:14:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:38:15 +0000   Mon, 23 Sep 2024 12:14:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:38:15 +0000   Mon, 23 Sep 2024 12:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.146.194
	  Hostname:    ha-565300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 277e2ef6a1034548ba796628eeb28a0c
	  System UUID:                c6a5291c-50da-454e-ae27-77fb67747768
	  Boot ID:                    a3f90f42-719a-4941-8f49-77af7d69f6fa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-rjg7r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-7jzhc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-7c65d6cfc9-kf224             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-565300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-gvvph                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-565300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-565300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-s4s8g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-565300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-565300                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-565300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-565300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-565300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28m   node-controller  Node ha-565300 event: Registered Node ha-565300 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-565300 status is now: NodeReady
	  Normal  RegisteredNode           24m   node-controller  Node ha-565300 event: Registered Node ha-565300 in Controller
	  Normal  RegisteredNode           21m   node-controller  Node ha-565300 event: Registered Node ha-565300 in Controller
	
	
	Name:               ha-565300-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-565300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_17_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:17:37 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:38:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 12:38:02 +0000   Mon, 23 Sep 2024 12:39:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 12:38:02 +0000   Mon, 23 Sep 2024 12:39:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 12:38:02 +0000   Mon, 23 Sep 2024 12:39:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 12:38:02 +0000   Mon, 23 Sep 2024 12:39:05 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.19.154.133
	  Hostname:    ha-565300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9cf6ce84e674600883910dd751f04ef
	  System UUID:                426d5aa4-7fc6-4a4b-8233-6561accfd3ed
	  Boot ID:                    f4fbe51f-d1ad-482c-b5a1-2346cd2181ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x4chx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-565300-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-jcj4l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-565300-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-565300-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-jzwmh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-565300-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-565300-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node ha-565300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node ha-565300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node ha-565300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node ha-565300-m02 event: Registered Node ha-565300-m02 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-565300-m02 event: Registered Node ha-565300-m02 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-565300-m02 event: Registered Node ha-565300-m02 in Controller
	  Normal  NodeNotReady             3m29s              node-controller  Node ha-565300-m02 status is now: NodeNotReady
	
	
	Name:               ha-565300-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-565300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_21_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:21:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:42:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:38:07 +0000   Mon, 23 Sep 2024 12:21:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:38:07 +0000   Mon, 23 Sep 2024 12:21:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:38:07 +0000   Mon, 23 Sep 2024 12:21:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:38:07 +0000   Mon, 23 Sep 2024 12:21:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.153.80
	  Hostname:    ha-565300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ad0f3c71bd3647bcbec4b56c1efbbcf7
	  System UUID:                267aef2d-fc53-c64f-8edf-0d874d3b3472
	  Boot ID:                    9c536902-5ca1-4323-91fd-b411caa4957e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-45cpz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-565300-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-j45vw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-565300-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-565300-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-9fdqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-565300-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-565300-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node ha-565300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node ha-565300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node ha-565300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node ha-565300-m03 event: Registered Node ha-565300-m03 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-565300-m03 event: Registered Node ha-565300-m03 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-565300-m03 event: Registered Node ha-565300-m03 in Controller
	
	
	Name:               ha-565300-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565300-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=ha-565300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_27_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:27:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565300-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:42:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:37:47 +0000   Mon, 23 Sep 2024 12:27:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:37:47 +0000   Mon, 23 Sep 2024 12:27:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:37:47 +0000   Mon, 23 Sep 2024 12:27:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:37:47 +0000   Mon, 23 Sep 2024 12:27:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.147.53
	  Hostname:    ha-565300-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4603c3e2ca564cca919533a0d5f3dba5
	  System UUID:                bf717688-8dad-174e-8c3a-18780c2b7801
	  Boot ID:                    c38ae734-0565-4afc-83c9-24ddfd76e473
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8p2mf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-proxy-mmpgc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node ha-565300-m04 event: Registered Node ha-565300-m04 in Controller
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node ha-565300-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node ha-565300-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node ha-565300-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node ha-565300-m04 event: Registered Node ha-565300-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-565300-m04 event: Registered Node ha-565300-m04 in Controller
	  Normal  NodeReady                14m                kubelet          Node ha-565300-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.341465] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.288041] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep23 12:13] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.151887] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[ +27.199041] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.078626] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.476615] systemd-fstab-generator[1033]: Ignoring "noauto" option for root device
	[  +0.169199] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.213134] systemd-fstab-generator[1059]: Ignoring "noauto" option for root device
	[  +2.785501] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.184487] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.187603] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.241356] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[ +10.813058] systemd-fstab-generator[1415]: Ignoring "noauto" option for root device
	[  +0.098752] kauditd_printk_skb: 202 callbacks suppressed
	[Sep23 12:14] systemd-fstab-generator[1670]: Ignoring "noauto" option for root device
	[  +5.089158] systemd-fstab-generator[1812]: Ignoring "noauto" option for root device
	[  +0.087210] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.080182] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.945235] systemd-fstab-generator[2307]: Ignoring "noauto" option for root device
	[  +6.768468] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.374632] kauditd_printk_skb: 29 callbacks suppressed
	[Sep23 12:17] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6557cb982034] <==
	{"level":"warn","ts":"2024-09-23T12:42:34.233866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.333985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.434654Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.533867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.633870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.637559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.649404Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.686132Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.690288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.698084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.705667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.714022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.721433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.731373Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.734650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.739434Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.747939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.754878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.760164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.764906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.775470Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.784936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.793845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.807213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-23T12:42:34.834117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a91358b7bd13ef6e","from":"a91358b7bd13ef6e","remote-peer-id":"c2991b7348d5d635","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:42:34 up 30 min,  0 users,  load average: 0.62, 0.59, 0.51
	Linux ha-565300 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ec009d58ec02] <==
	I0923 12:42:02.295760       1 main.go:322] Node ha-565300-m04 has CIDR [10.244.3.0/24] 
	I0923 12:42:12.296424       1 main.go:295] Handling node with IPs: map[172.19.146.194:{}]
	I0923 12:42:12.296476       1 main.go:299] handling current node
	I0923 12:42:12.296505       1 main.go:295] Handling node with IPs: map[172.19.154.133:{}]
	I0923 12:42:12.296520       1 main.go:322] Node ha-565300-m02 has CIDR [10.244.1.0/24] 
	I0923 12:42:12.297009       1 main.go:295] Handling node with IPs: map[172.19.153.80:{}]
	I0923 12:42:12.297141       1 main.go:322] Node ha-565300-m03 has CIDR [10.244.2.0/24] 
	I0923 12:42:12.297306       1 main.go:295] Handling node with IPs: map[172.19.147.53:{}]
	I0923 12:42:12.297332       1 main.go:322] Node ha-565300-m04 has CIDR [10.244.3.0/24] 
	I0923 12:42:22.289267       1 main.go:295] Handling node with IPs: map[172.19.146.194:{}]
	I0923 12:42:22.289302       1 main.go:299] handling current node
	I0923 12:42:22.289318       1 main.go:295] Handling node with IPs: map[172.19.154.133:{}]
	I0923 12:42:22.289325       1 main.go:322] Node ha-565300-m02 has CIDR [10.244.1.0/24] 
	I0923 12:42:22.289506       1 main.go:295] Handling node with IPs: map[172.19.153.80:{}]
	I0923 12:42:22.289577       1 main.go:322] Node ha-565300-m03 has CIDR [10.244.2.0/24] 
	I0923 12:42:22.289698       1 main.go:295] Handling node with IPs: map[172.19.147.53:{}]
	I0923 12:42:22.289720       1 main.go:322] Node ha-565300-m04 has CIDR [10.244.3.0/24] 
	I0923 12:42:32.289070       1 main.go:295] Handling node with IPs: map[172.19.154.133:{}]
	I0923 12:42:32.289169       1 main.go:322] Node ha-565300-m02 has CIDR [10.244.1.0/24] 
	I0923 12:42:32.289309       1 main.go:295] Handling node with IPs: map[172.19.153.80:{}]
	I0923 12:42:32.289420       1 main.go:322] Node ha-565300-m03 has CIDR [10.244.2.0/24] 
	I0923 12:42:32.289495       1 main.go:295] Handling node with IPs: map[172.19.147.53:{}]
	I0923 12:42:32.289517       1 main.go:322] Node ha-565300-m04 has CIDR [10.244.3.0/24] 
	I0923 12:42:32.289573       1 main.go:295] Handling node with IPs: map[172.19.146.194:{}]
	I0923 12:42:32.289594       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d6fe896ee937] <==
	I0923 12:14:17.525024       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 12:14:17.568176       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0923 12:14:17.590561       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 12:14:22.450804       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0923 12:14:23.045413       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0923 12:21:15.212477       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:21:15.213640       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0923 12:21:15.214375       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.801µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0923 12:21:15.215271       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0923 12:21:15.319953       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="39.782249ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-565300-m03.17f7dee8f5ebc58e" result=null
	E0923 12:23:15.675950       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56707: use of closed network connection
	E0923 12:23:17.312538       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56709: use of closed network connection
	E0923 12:23:17.800936       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56711: use of closed network connection
	E0923 12:23:18.360662       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56713: use of closed network connection
	E0923 12:23:18.988299       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56715: use of closed network connection
	E0923 12:23:19.453347       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56718: use of closed network connection
	E0923 12:23:19.951394       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56720: use of closed network connection
	E0923 12:23:20.447675       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56722: use of closed network connection
	E0923 12:23:20.912897       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56724: use of closed network connection
	E0923 12:23:21.777764       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56727: use of closed network connection
	E0923 12:23:32.248747       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56729: use of closed network connection
	E0923 12:23:32.707310       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56731: use of closed network connection
	E0923 12:23:43.195771       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56733: use of closed network connection
	E0923 12:23:43.651416       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56736: use of closed network connection
	E0923 12:23:54.123282       1 conn.go:339] Error on socket receive: read tcp 172.19.159.254:8443->172.19.144.1:56738: use of closed network connection
	
	
	==> kube-controller-manager [bb14fd3d1b74] <==
	I0923 12:27:08.931715       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m04"
	I0923 12:27:09.201138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m04"
	I0923 12:27:15.342323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m04"
	I0923 12:27:35.200905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m04"
	I0923 12:27:35.204705       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565300-m04"
	I0923 12:27:35.229944       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m04"
	I0923 12:27:35.280370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m04"
	I0923 12:27:50.334199       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m02"
	I0923 12:27:54.813733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m03"
	I0923 12:28:04.140317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300"
	I0923 12:32:41.171259       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m04"
	I0923 12:32:56.475616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m02"
	I0923 12:33:00.854355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m03"
	I0923 12:33:10.019503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300"
	I0923 12:37:47.197501       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m04"
	I0923 12:38:02.643432       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m02"
	I0923 12:38:07.226618       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m03"
	I0923 12:38:15.780264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300"
	I0923 12:39:05.496431       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565300-m04"
	I0923 12:39:05.496539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m02"
	I0923 12:39:05.805769       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m02"
	I0923 12:39:05.984292       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.875767ms"
	I0923 12:39:05.984989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.903µs"
	I0923 12:39:07.373589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m02"
	I0923 12:39:12.308173       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565300-m02"
	
	
	==> kube-proxy [5a8e37d9bdb7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:14:24.478901       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:14:24.495540       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.146.194"]
	E0923 12:14:24.495616       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:14:24.556077       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:14:24.556120       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:14:24.556144       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:14:24.559499       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:14:24.559981       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:14:24.560112       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:14:24.561780       1 config.go:199] "Starting service config controller"
	I0923 12:14:24.561830       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:14:24.562014       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:14:24.562028       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:14:24.565530       1 config.go:328] "Starting node config controller"
	I0923 12:14:24.565569       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:14:24.662272       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:14:24.662298       1 shared_informer.go:320] Caches are synced for service config
	I0923 12:14:24.665811       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c9ae68aa117] <==
	W0923 12:14:15.770193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 12:14:15.770370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:15.821737       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 12:14:15.821776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 12:14:15.882072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 12:14:15.882184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 12:14:17.482559       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 12:22:25.442636       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 5f280359-8465-4f3e-9edb-aca9c8fdea2b(default/busybox-7dff88458-86bbx) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-86bbx"
	E0923 12:22:25.465512       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 5f280359-8465-4f3e-9edb-aca9c8fdea2b(default/busybox-7dff88458-86bbx) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-86bbx"
	I0923 12:22:25.465769       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-86bbx" node="ha-565300-m03"
	E0923 12:22:25.815213       1 schedule_one.go:953] "Scheduler cache AssumePod failed" err="pod 5d4542dc-bc77-4def-a133-8fac51f88c4e(default/busybox-7dff88458-45cpz) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-45cpz"
	E0923 12:22:25.815321       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="pod 5d4542dc-bc77-4def-a133-8fac51f88c4e(default/busybox-7dff88458-45cpz) is in the cache, so can't be assumed" pod="default/busybox-7dff88458-45cpz"
	I0923 12:22:25.815341       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-45cpz" node="ha-565300-m03"
	E0923 12:22:27.254449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qhcgz\": pod busybox-7dff88458-qhcgz is already assigned to node \"ha-565300\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qhcgz" node="ha-565300"
	E0923 12:22:27.297313       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2e3f06e7-cb04-4d02-9613-2b6d50f47a5e(default/busybox-7dff88458-qhcgz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-qhcgz"
	E0923 12:22:27.297359       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qhcgz\": pod busybox-7dff88458-qhcgz is already assigned to node \"ha-565300\"" pod="default/busybox-7dff88458-qhcgz"
	I0923 12:22:27.297398       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qhcgz" node="ha-565300"
	E0923 12:27:05.240459       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mmpgc\": pod kube-proxy-mmpgc is already assigned to node \"ha-565300-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mmpgc" node="ha-565300-m04"
	E0923 12:27:05.245215       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 88a2881f-9123-47fb-9f0e-0465a30c7564(kube-system/kube-proxy-mmpgc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-mmpgc"
	E0923 12:27:05.245432       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mmpgc\": pod kube-proxy-mmpgc is already assigned to node \"ha-565300-m04\"" pod="kube-system/kube-proxy-mmpgc"
	I0923 12:27:05.245497       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mmpgc" node="ha-565300-m04"
	E0923 12:27:05.241215       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wc2ht\": pod kindnet-wc2ht is already assigned to node \"ha-565300-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wc2ht" node="ha-565300-m04"
	E0923 12:27:05.250284       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 386fa0d8-d98b-407f-a139-06f4948ed82c(kube-system/kindnet-wc2ht) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-wc2ht"
	E0923 12:27:05.250377       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wc2ht\": pod kindnet-wc2ht is already assigned to node \"ha-565300-m04\"" pod="kube-system/kindnet-wc2ht"
	I0923 12:27:05.250431       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wc2ht" node="ha-565300-m04"
	
	
	==> kubelet <==
	Sep 23 12:38:17 ha-565300 kubelet[2314]: E0923 12:38:17.615936    2314 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:38:17 ha-565300 kubelet[2314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:38:17 ha-565300 kubelet[2314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:38:17 ha-565300 kubelet[2314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:38:17 ha-565300 kubelet[2314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 12:39:17 ha-565300 kubelet[2314]: E0923 12:39:17.615759    2314 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:39:17 ha-565300 kubelet[2314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:39:17 ha-565300 kubelet[2314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:39:17 ha-565300 kubelet[2314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:39:17 ha-565300 kubelet[2314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 12:40:17 ha-565300 kubelet[2314]: E0923 12:40:17.616345    2314 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:40:17 ha-565300 kubelet[2314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:40:17 ha-565300 kubelet[2314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:40:17 ha-565300 kubelet[2314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:40:17 ha-565300 kubelet[2314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 12:41:17 ha-565300 kubelet[2314]: E0923 12:41:17.614160    2314 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:41:17 ha-565300 kubelet[2314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:41:17 ha-565300 kubelet[2314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:41:17 ha-565300 kubelet[2314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:41:17 ha-565300 kubelet[2314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 12:42:17 ha-565300 kubelet[2314]: E0923 12:42:17.618228    2314 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:42:17 ha-565300 kubelet[2314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:42:17 ha-565300 kubelet[2314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:42:17 ha-565300 kubelet[2314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:42:17 ha-565300 kubelet[2314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-565300 -n ha-565300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-565300 -n ha-565300: (10.5381859s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (163.34s)
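Editor's note: the repeating kubelet canary errors in the log above say the ip6tables `nat` table does not exist in the guest kernel, which typically means the `ip6table_nat` module is not loaded. A minimal diagnostic sketch, assuming the `ha-565300` profile is still running and that the guest kernel ships `ip6table_nat` as a loadable module (both are assumptions, not confirmed by this report):

```powershell
# Check whether the module is loaded in the guest; load it if not.
minikube ssh -p ha-565300 -- "lsmod | grep -q ip6table_nat || sudo modprobe ip6table_nat"
# Verify the nat table is now visible to ip6tables.
minikube ssh -p ha-565300 -- "sudo ip6tables -t nat -L -n >/dev/null && echo 'nat table present'"
```

If the second command still fails, the guest kernel was likely built without IPv6 NAT support and the canary error is expected noise rather than the cause of this test failure.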

TestMultiNode/serial/PingHostFrom2Pods (51.54s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-h4tgf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-h4tgf -- sh -c "ping -c 1 172.19.144.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-h4tgf -- sh -c "ping -c 1 172.19.144.1": exit status 1 (10.3991564s)

-- stdout --
	PING 172.19.144.1 (172.19.144.1): 56 data bytes
	
	--- 172.19.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.19.144.1) from pod (busybox-7dff88458-h4tgf): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-wwgwh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-wwgwh -- sh -c "ping -c 1 172.19.144.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-wwgwh -- sh -c "ping -c 1 172.19.144.1": exit status 1 (10.4168829s)

-- stdout --
	PING 172.19.144.1 (172.19.144.1): 56 data bytes
	
	--- 172.19.144.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.19.144.1) from pod (busybox-7dff88458-wwgwh): exit status 1
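Editor's note: both pods resolve `host.minikube.internal` but lose 100% of ICMP echo packets to the Hyper-V host gateway (172.19.144.1). One common cause on Windows CI hosts, offered here only as a hedged hypothesis, is the host firewall dropping inbound ICMPv4 echo requests from the Default Switch subnet. A sketch of a host-side workaround, assuming an elevated PowerShell session; the rule name is illustrative, not part of the test suite:

```powershell
# Allow inbound ICMPv4 echo requests (type 8) on the Hyper-V host.
# Scope this to the Default Switch subnet in a real setup.
New-NetFirewallRule -DisplayName "minikube ICMPv4-In (sketch)" `
    -Direction Inbound -Protocol ICMPv4 -IcmpType 8 -Action Allow
# Re-test from a pod afterwards, e.g.:
# out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-h4tgf -- ping -c 1 172.19.144.1
```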
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-560300 -n multinode-560300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-560300 -n multinode-560300: (10.3519416s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 logs -n 25: (7.3738045s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-313100 ssh -- ls                    | mount-start-2-313100 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:06 UTC | 23 Sep 24 13:06 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-313100                           | mount-start-1-313100 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:06 UTC | 23 Sep 24 13:07 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-313100 ssh -- ls                    | mount-start-2-313100 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:07 UTC | 23 Sep 24 13:07 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-313100                           | mount-start-2-313100 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:07 UTC | 23 Sep 24 13:07 UTC |
	| start   | -p mount-start-2-313100                           | mount-start-2-313100 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:07 UTC | 23 Sep 24 13:09 UTC |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-313100 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:09 UTC |                     |
	|         | --profile mount-start-2-313100 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-313100 ssh -- ls                    | mount-start-2-313100 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:09 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-313100                           | mount-start-2-313100 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:09 UTC | 23 Sep 24 13:10 UTC |
	| delete  | -p mount-start-1-313100                           | mount-start-1-313100 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:10 UTC | 23 Sep 24 13:10 UTC |
	| start   | -p multinode-560300                               | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:10 UTC | 23 Sep 24 13:16 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- apply -f                   | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- rollout                    | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- get pods -o                | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- get pods -o                | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | busybox-7dff88458-h4tgf --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | busybox-7dff88458-wwgwh --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | busybox-7dff88458-h4tgf --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | busybox-7dff88458-wwgwh --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | busybox-7dff88458-h4tgf -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | busybox-7dff88458-wwgwh -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- get pods -o                | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC | 23 Sep 24 13:16 UTC |
	|         | busybox-7dff88458-h4tgf                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:16 UTC |                     |
	|         | busybox-7dff88458-h4tgf -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.144.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:17 UTC | 23 Sep 24 13:17 UTC |
	|         | busybox-7dff88458-wwgwh                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-560300 -- exec                       | multinode-560300     | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:17 UTC |                     |
	|         | busybox-7dff88458-wwgwh -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.144.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:10:09
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:10:09.270973    1580 out.go:345] Setting OutFile to fd 1068 ...
	I0923 13:10:09.324644    1580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:10:09.324644    1580 out.go:358] Setting ErrFile to fd 1744...
	I0923 13:10:09.324644    1580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:10:09.342216    1580 out.go:352] Setting JSON to false
	I0923 13:10:09.343859    1580 start.go:129] hostinfo: {"hostname":"minikube5","uptime":492985,"bootTime":1726604024,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 13:10:09.344869    1580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 13:10:09.346941    1580 out.go:177] * [multinode-560300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 13:10:09.350475    1580 notify.go:220] Checking for updates...
	I0923 13:10:09.352845    1580 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:10:09.357935    1580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:10:09.360102    1580 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 13:10:09.363229    1580 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:10:09.365761    1580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:10:09.369544    1580 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:10:09.370262    1580 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:10:14.166699    1580 out.go:177] * Using the hyperv driver based on user configuration
	I0923 13:10:14.169132    1580 start.go:297] selected driver: hyperv
	I0923 13:10:14.169651    1580 start.go:901] validating driver "hyperv" against <nil>
	I0923 13:10:14.169728    1580 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:10:14.210227    1580 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 13:10:14.211490    1580 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:10:14.211690    1580 cni.go:84] Creating CNI manager for ""
	I0923 13:10:14.211690    1580 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0923 13:10:14.211690    1580 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 13:10:14.211919    1580 start.go:340] cluster config:
	{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:10:14.212003    1580 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:10:14.215793    1580 out.go:177] * Starting "multinode-560300" primary control-plane node in "multinode-560300" cluster
	I0923 13:10:14.217793    1580 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:10:14.218595    1580 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 13:10:14.218595    1580 cache.go:56] Caching tarball of preloaded images
	I0923 13:10:14.218682    1580 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 13:10:14.218682    1580 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 13:10:14.219291    1580 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:10:14.219291    1580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json: {Name:mk53cfab1b74346e8ea9b7bd91517154d47cef5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:10:14.219938    1580 start.go:360] acquireMachinesLock for multinode-560300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:10:14.220631    1580 start.go:364] duration metric: took 657.2µs to acquireMachinesLock for "multinode-560300"
	I0923 13:10:14.220780    1580 start.go:93] Provisioning new machine with config: &{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 13:10:14.220780    1580 start.go:125] createHost starting for "" (driver="hyperv")
	I0923 13:10:14.223104    1580 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 13:10:14.223158    1580 start.go:159] libmachine.API.Create for "multinode-560300" (driver="hyperv")
	I0923 13:10:14.223158    1580 client.go:168] LocalClient.Create starting
	I0923 13:10:14.223726    1580 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0923 13:10:14.223977    1580 main.go:141] libmachine: Decoding PEM data...
	I0923 13:10:14.223977    1580 main.go:141] libmachine: Parsing certificate...
	I0923 13:10:14.224157    1580 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0923 13:10:14.224366    1580 main.go:141] libmachine: Decoding PEM data...
	I0923 13:10:14.224405    1580 main.go:141] libmachine: Parsing certificate...
	I0923 13:10:14.224515    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0923 13:10:16.107290    1580 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0923 13:10:16.107290    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:16.107439    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0923 13:10:17.627094    1580 main.go:141] libmachine: [stdout =====>] : False
	
	I0923 13:10:17.627094    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:17.628494    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 13:10:18.947705    1580 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 13:10:18.947705    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:18.948655    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 13:10:22.242983    1580 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 13:10:22.242983    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:22.245364    1580 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 13:10:22.646030    1580 main.go:141] libmachine: Creating SSH key...
	I0923 13:10:22.844782    1580 main.go:141] libmachine: Creating VM...
	I0923 13:10:22.844782    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 13:10:25.381376    1580 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 13:10:25.381376    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:25.381376    1580 main.go:141] libmachine: Using switch "Default Switch"
	I0923 13:10:25.382215    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 13:10:26.994201    1580 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 13:10:26.994906    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:26.995085    1580 main.go:141] libmachine: Creating VHD
	I0923 13:10:26.995085    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0923 13:10:30.385792    1580 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6F826F08-B1F2-45B6-9E39-1EB81D906008
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0923 13:10:30.385792    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:30.386792    1580 main.go:141] libmachine: Writing magic tar header
	I0923 13:10:30.386792    1580 main.go:141] libmachine: Writing SSH key tar header
	I0923 13:10:30.400574    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0923 13:10:33.356403    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:10:33.356403    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:33.357231    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\disk.vhd' -SizeBytes 20000MB
	I0923 13:10:35.660867    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:10:35.661690    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:35.661690    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-560300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0923 13:10:38.813205    1580 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-560300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	I0923 13:10:38.813205    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:38.813939    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-560300 -DynamicMemoryEnabled $false
	I0923 13:10:40.784916    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:10:40.785149    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:40.785149    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-560300 -Count 2
	I0923 13:10:42.684351    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:10:42.684351    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:42.685279    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-560300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\boot2docker.iso'
	I0923 13:10:44.913500    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:10:44.914443    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:44.914532    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-560300 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\disk.vhd'
	I0923 13:10:47.219061    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:10:47.219061    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:47.219061    1580 main.go:141] libmachine: Starting VM...
	I0923 13:10:47.219061    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-560300
	I0923 13:10:50.028928    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:10:50.028960    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:50.028999    1580 main.go:141] libmachine: Waiting for host to start...
	I0923 13:10:50.029029    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:10:52.006342    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:10:52.006342    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:52.006474    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:10:54.229151    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:10:54.229151    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:55.229573    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:10:57.131231    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:10:57.131231    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:10:57.131355    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:10:59.307428    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:10:59.307428    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:00.307967    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:02.231278    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:02.231278    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:02.231559    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:04.445666    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:11:04.445736    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:05.446358    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:07.387181    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:07.387181    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:07.387410    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:09.579282    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:11:09.579282    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:10.579777    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:12.493523    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:12.493523    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:12.494085    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:14.746580    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:14.746646    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:14.746646    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:16.638530    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:16.639580    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:16.639644    1580 machine.go:93] provisionDockerMachine start ...
	I0923 13:11:16.639801    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:18.502092    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:18.502092    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:18.502092    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:20.662334    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:20.662334    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:20.667574    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:11:20.680661    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.215 22 <nil> <nil>}
	I0923 13:11:20.680661    1580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:11:20.801520    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 13:11:20.801608    1580 buildroot.go:166] provisioning hostname "multinode-560300"
	I0923 13:11:20.801692    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:22.649989    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:22.649989    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:22.649989    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:24.849952    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:24.849952    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:24.854321    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:11:24.854909    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.215 22 <nil> <nil>}
	I0923 13:11:24.854909    1580 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-560300 && echo "multinode-560300" | sudo tee /etc/hostname
	I0923 13:11:25.002552    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-560300
	
	I0923 13:11:25.002552    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:26.828943    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:26.828943    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:26.829515    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:29.124862    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:29.125644    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:29.131450    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:11:29.132202    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.215 22 <nil> <nil>}
	I0923 13:11:29.132202    1580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-560300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-560300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-560300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:11:29.273824    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:11:29.273824    1580 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 13:11:29.273824    1580 buildroot.go:174] setting up certificates
	I0923 13:11:29.273824    1580 provision.go:84] configureAuth start
	I0923 13:11:29.273824    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:31.103585    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:31.103585    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:31.103856    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:33.328837    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:33.329847    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:33.330007    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:35.151816    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:35.151816    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:35.152772    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:37.346785    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:37.346785    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:37.346785    1580 provision.go:143] copyHostCerts
	I0923 13:11:37.347428    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 13:11:37.347768    1580 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 13:11:37.347796    1580 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 13:11:37.347929    1580 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 13:11:37.348654    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 13:11:37.348654    1580 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 13:11:37.348654    1580 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 13:11:37.349223    1580 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 13:11:37.349828    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 13:11:37.349828    1580 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 13:11:37.349828    1580 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 13:11:37.349828    1580 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 13:11:37.350915    1580 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-560300 san=[127.0.0.1 172.19.153.215 localhost minikube multinode-560300]
	I0923 13:11:37.619491    1580 provision.go:177] copyRemoteCerts
	I0923 13:11:37.634831    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:11:37.634831    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:39.474041    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:39.474041    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:39.474851    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:41.615391    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:41.615391    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:41.616893    1580 sshutil.go:53] new ssh client: &{IP:172.19.153.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:11:41.722014    1580 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.0869069s)
	I0923 13:11:41.722014    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 13:11:41.722014    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:11:41.763405    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 13:11:41.763986    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0923 13:11:41.803456    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 13:11:41.803456    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 13:11:41.850074    1580 provision.go:87] duration metric: took 12.5753678s to configureAuth
	I0923 13:11:41.850125    1580 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:11:41.850817    1580 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:11:41.850985    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:43.678351    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:43.678541    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:43.678541    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:45.875347    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:45.875347    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:45.879086    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:11:45.879612    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.215 22 <nil> <nil>}
	I0923 13:11:45.879612    1580 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 13:11:46.015272    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 13:11:46.015334    1580 buildroot.go:70] root file system type: tmpfs
	I0923 13:11:46.015669    1580 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 13:11:46.015837    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:47.880607    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:47.880607    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:47.880607    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:50.067051    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:50.068047    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:50.071924    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:11:50.072172    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.215 22 <nil> <nil>}
	I0923 13:11:50.072172    1580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 13:11:50.220990    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 13:11:50.221113    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:52.077900    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:52.077900    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:52.078191    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:11:54.306374    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:11:54.306374    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:54.310670    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:11:54.311192    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.215 22 <nil> <nil>}
	I0923 13:11:54.311297    1580 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 13:11:56.421436    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 13:11:56.421522    1580 machine.go:96] duration metric: took 39.77916s to provisionDockerMachine
	I0923 13:11:56.421587    1580 client.go:171] duration metric: took 1m42.1915299s to LocalClient.Create
	I0923 13:11:56.421631    1580 start.go:167] duration metric: took 1m42.1915739s to libmachine.API.Create "multinode-560300"
	I0923 13:11:56.421749    1580 start.go:293] postStartSetup for "multinode-560300" (driver="hyperv")
	I0923 13:11:56.421749    1580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:11:56.431018    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:11:56.431018    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:11:58.294376    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:11:58.294376    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:11:58.294589    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:12:00.456596    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:12:00.456674    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:00.456983    1580 sshutil.go:53] new ssh client: &{IP:172.19.153.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:12:00.565209    1580 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.133912s)
	I0923 13:12:00.576590    1580 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:12:00.582166    1580 command_runner.go:130] > NAME=Buildroot
	I0923 13:12:00.582166    1580 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:12:00.582166    1580 command_runner.go:130] > ID=buildroot
	I0923 13:12:00.582166    1580 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:12:00.582166    1580 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:12:00.582453    1580 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:12:00.582453    1580 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 13:12:00.582453    1580 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 13:12:00.583421    1580 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 13:12:00.583421    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 13:12:00.591458    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:12:00.607487    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 13:12:00.650323    1580 start.go:296] duration metric: took 4.2282278s for postStartSetup
	I0923 13:12:00.653705    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:12:02.566507    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:12:02.566507    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:02.567554    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:12:04.790898    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:12:04.791359    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:04.791499    1580 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:12:04.793949    1580 start.go:128] duration metric: took 1m50.5657051s to createHost
	I0923 13:12:04.794041    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:12:06.659108    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:12:06.659108    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:06.659108    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:12:08.861451    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:12:08.861451    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:08.865692    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:12:08.866061    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.215 22 <nil> <nil>}
	I0923 13:12:08.866061    1580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:12:08.999963    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727097129.207963514
	
	I0923 13:12:08.999963    1580 fix.go:216] guest clock: 1727097129.207963514
	I0923 13:12:08.999963    1580 fix.go:229] Guest: 2024-09-23 13:12:09.207963514 +0000 UTC Remote: 2024-09-23 13:12:04.7939493 +0000 UTC m=+115.585448101 (delta=4.414014214s)
	I0923 13:12:08.999963    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:12:10.850308    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:12:10.850308    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:10.851464    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:12:13.090734    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:12:13.090734    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:13.095512    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:12:13.096030    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.153.215 22 <nil> <nil>}
	I0923 13:12:13.096109    1580 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727097128
	I0923 13:12:13.224633    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 13:12:08 UTC 2024
	
	I0923 13:12:13.224704    1580 fix.go:236] clock set: Mon Sep 23 13:12:08 UTC 2024
	 (err=<nil>)
	I0923 13:12:13.224704    1580 start.go:83] releasing machines lock for "multinode-560300", held for 1m58.9960398s
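The clock-fix sequence above compares the guest's `date +%s.%N` output against the host clock and, when they drift, snaps the guest back with `sudo date -s @<epoch>` over SSH. A minimal sketch of the delta computation, using the illustrative timestamps from this log (not the exact `fix.go` logic):

```shell
# Hedged sketch of minikube's guest-clock fix: compute the guest/host
# delta and decide what epoch to set. Timestamps are illustrative
# values taken from the log above, not live readings.
guest_ts=1727097129.207963514   # output of `date +%s.%N` inside the VM
host_ts=1727097124.793949300    # host wall clock at roughly the same moment

# Delta in seconds; awk handles the fractional part.
delta=$(awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { printf "%.3f", g - h }')
echo "delta=${delta}s"

# The real fix runs `sudo date -s @<seconds>` in the guest; note it
# truncates to whole seconds (the fractional part is discarded).
echo "would run: sudo date -s @${host_ts%.*}"
```

The truncation to whole seconds is why the guest ends up on an even second boundary (13:12:08) after the fix.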
	I0923 13:12:13.224927    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:12:15.058112    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:12:15.058251    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:15.058251    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:12:17.297495    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:12:17.297495    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:17.302649    1580 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 13:12:17.302831    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:12:17.310428    1580 ssh_runner.go:195] Run: cat /version.json
	I0923 13:12:17.310428    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:12:19.272976    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:12:19.272976    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:19.273325    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:12:19.276300    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:12:19.276455    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:19.276455    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:12:21.602167    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:12:21.602167    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:21.603141    1580 sshutil.go:53] new ssh client: &{IP:172.19.153.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:12:21.624479    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:12:21.624949    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:12:21.625098    1580 sshutil.go:53] new ssh client: &{IP:172.19.153.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:12:21.698464    1580 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 13:12:21.698464    1580 ssh_runner.go:235] Completed: cat /version.json: (4.3877406s)
	I0923 13:12:21.708536    1580 ssh_runner.go:195] Run: systemctl --version
	I0923 13:12:21.709229    1580 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0923 13:12:21.709229    1580 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4061913s)
	W0923 13:12:21.709229    1580 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 13:12:21.717515    1580 command_runner.go:130] > systemd 252 (252)
	I0923 13:12:21.717515    1580 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 13:12:21.726652    1580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:12:21.734872    1580 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 13:12:21.735307    1580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:12:21.743253    1580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:12:21.773031    1580 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 13:12:21.773031    1580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
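The `find ... -exec mv` step above disables conflicting bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix rather than deleting them, so they can be restored later. A sketch against a throwaway directory (the real target is `/etc/cni/net.d`, edited via sudo):

```shell
# Sketch of the CNI-disable step: rename bridge/podman configs to
# *.mk_disabled so the container runtime no longer loads them.
# A temp directory stands in for /etc/cni/net.d.
cni_dir=$(mktemp -d)
touch "$cni_dir/87-podman-bridge.conflist" "$cni_dir/10-kindnet.conflist"

# Same pattern as the logged command: match *bridge*/*podman*, skip
# anything already disabled, print what was moved.
find "$cni_dir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
echo
ls "$cni_dir"
```

Only `87-podman-bridge.conflist` matches and gets renamed; the kindnet config is left untouched, matching the single entry reported in the log.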
	I0923 13:12:21.773031    1580 start.go:495] detecting cgroup driver to use...
	I0923 13:12:21.773031    1580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0923 13:12:21.802409    1580 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 13:12:21.802764    1580 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 13:12:21.806361    1580 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 13:12:21.815184    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 13:12:21.842146    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 13:12:21.860692    1580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:12:21.874919    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 13:12:21.901675    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:12:21.928103    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:12:21.954715    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:12:21.985164    1580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:12:22.021985    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:12:22.053059    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:12:22.078613    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
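The series of `sed` edits above rewrites `/etc/containerd/config.toml` in place; the key one flips `SystemdCgroup` so containerd uses the cgroupfs driver that minikube selected. A sketch of that edit against a sample file (a minimal assumed config, not the full containerd default):

```shell
# Sketch of the cgroup-driver rewrite: flip SystemdCgroup to false in a
# containerd config.toml. A temp file stands in for the real
# /etc/containerd/config.toml (which is edited via sudo).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same sed expression as the logged command: preserve indentation via \1.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"
```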
	I0923 13:12:22.107248    1580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:12:22.123136    1580 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:12:22.123276    1580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:12:22.134994    1580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:12:22.166548    1580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:12:22.188209    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:12:22.370641    1580 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:12:22.397136    1580 start.go:495] detecting cgroup driver to use...
	I0923 13:12:22.408409    1580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 13:12:22.426586    1580 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 13:12:22.427366    1580 command_runner.go:130] > [Unit]
	I0923 13:12:22.427366    1580 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 13:12:22.427502    1580 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 13:12:22.427502    1580 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 13:12:22.427569    1580 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 13:12:22.427569    1580 command_runner.go:130] > StartLimitBurst=3
	I0923 13:12:22.427613    1580 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 13:12:22.427638    1580 command_runner.go:130] > [Service]
	I0923 13:12:22.427638    1580 command_runner.go:130] > Type=notify
	I0923 13:12:22.427638    1580 command_runner.go:130] > Restart=on-failure
	I0923 13:12:22.427638    1580 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 13:12:22.427743    1580 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 13:12:22.427743    1580 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 13:12:22.427743    1580 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 13:12:22.427743    1580 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 13:12:22.427861    1580 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 13:12:22.427861    1580 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 13:12:22.427861    1580 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 13:12:22.427988    1580 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 13:12:22.427988    1580 command_runner.go:130] > ExecStart=
	I0923 13:12:22.427988    1580 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0923 13:12:22.428091    1580 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 13:12:22.428091    1580 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 13:12:22.428091    1580 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 13:12:22.428091    1580 command_runner.go:130] > LimitNOFILE=infinity
	I0923 13:12:22.428190    1580 command_runner.go:130] > LimitNPROC=infinity
	I0923 13:12:22.428190    1580 command_runner.go:130] > LimitCORE=infinity
	I0923 13:12:22.428190    1580 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 13:12:22.428190    1580 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 13:12:22.428190    1580 command_runner.go:130] > TasksMax=infinity
	I0923 13:12:22.428190    1580 command_runner.go:130] > TimeoutStartSec=0
	I0923 13:12:22.428275    1580 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 13:12:22.428275    1580 command_runner.go:130] > Delegate=yes
	I0923 13:12:22.428275    1580 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 13:12:22.428275    1580 command_runner.go:130] > KillMode=process
	I0923 13:12:22.428275    1580 command_runner.go:130] > [Install]
	I0923 13:12:22.428360    1580 command_runner.go:130] > WantedBy=multi-user.target
	I0923 13:12:22.440307    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:12:22.470630    1580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:12:22.507267    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:12:22.539572    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:12:22.572381    1580 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 13:12:22.623565    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:12:22.643490    1580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:12:22.672004    1580 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 13:12:22.681418    1580 ssh_runner.go:195] Run: which cri-dockerd
	I0923 13:12:22.686501    1580 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 13:12:22.695073    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 13:12:22.710629    1580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 13:12:22.746493    1580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 13:12:22.928666    1580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 13:12:23.104006    1580 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 13:12:23.104246    1580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
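The log only records that a 130-byte `daemon.json` was written to configure Docker for the "cgroupfs" cgroup driver; its contents are not shown, so the keys below are an assumption about the file's likely shape, sanity-checked as valid JSON:

```shell
# Hedged guess at the daemon.json minikube writes to pin the "cgroupfs"
# cgroup driver. The exact keys are an assumption -- the log shows only
# the file's size -- so treat this as a shape, not the real contents.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
EOF

# Validate before "installing": a malformed daemon.json prevents
# `systemctl restart docker` from succeeding.
python3 -m json.tool "$cfg" > /dev/null && echo "valid JSON"
```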
	I0923 13:12:23.148422    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:12:23.323716    1580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:12:25.844053    1580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5201675s)
	I0923 13:12:25.860280    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 13:12:25.890703    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:12:25.920457    1580 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 13:12:26.093226    1580 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 13:12:26.264535    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:12:26.448697    1580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 13:12:26.482739    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:12:26.513601    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:12:26.683983    1580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 13:12:26.774322    1580 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 13:12:26.784827    1580 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 13:12:26.791944    1580 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 13:12:26.791944    1580 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:12:26.791944    1580 command_runner.go:130] > Device: 0,22	Inode: 889         Links: 1
	I0923 13:12:26.791944    1580 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 13:12:26.791944    1580 command_runner.go:130] > Access: 2024-09-23 13:12:26.916123396 +0000
	I0923 13:12:26.791944    1580 command_runner.go:130] > Modify: 2024-09-23 13:12:26.916123396 +0000
	I0923 13:12:26.792959    1580 command_runner.go:130] > Change: 2024-09-23 13:12:26.919123586 +0000
	I0923 13:12:26.792959    1580 command_runner.go:130] >  Birth: -
	I0923 13:12:26.793019    1580 start.go:563] Will wait 60s for crictl version
	I0923 13:12:26.801679    1580 ssh_runner.go:195] Run: which crictl
	I0923 13:12:26.807414    1580 command_runner.go:130] > /usr/bin/crictl
	I0923 13:12:26.815866    1580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:12:26.862486    1580 command_runner.go:130] > Version:  0.1.0
	I0923 13:12:26.862791    1580 command_runner.go:130] > RuntimeName:  docker
	I0923 13:12:26.862791    1580 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 13:12:26.862791    1580 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:12:26.862791    1580 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 13:12:26.869496    1580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:12:26.899217    1580 command_runner.go:130] > 27.3.0
	I0923 13:12:26.909797    1580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:12:26.935326    1580 command_runner.go:130] > 27.3.0
	I0923 13:12:26.940017    1580 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 13:12:26.940017    1580 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 13:12:26.943854    1580 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 13:12:26.943854    1580 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 13:12:26.943854    1580 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 13:12:26.943854    1580 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 13:12:26.945407    1580 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 13:12:26.945407    1580 ip.go:214] interface addr: 172.19.144.1/20
	I0923 13:12:26.954150    1580 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 13:12:26.959484    1580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
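The `/etc/hosts` update above first greps for an existing `host.minikube.internal` entry, then rewrites the file by filtering out any stale line and appending the fresh mapping. A bash sketch with a temp file in place of `/etc/hosts` (the real flow installs the result with `sudo cp`, as shown in the logged command):

```shell
# Sketch of the host.minikube.internal update. A temp file stands in
# for /etc/hosts; the 172.19.0.9 stale entry is illustrative.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.0.9\thost.minikube.internal\n' > "$hosts"

ip=172.19.144.1
# Drop any stale tab-separated host.minikube.internal line, append the
# fresh one, then swap the file into place (bash $'\t' quoting, as in
# the logged /bin/bash -c command).
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep host.minikube.internal "$hosts"
```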
	I0923 13:12:26.978346    1580 kubeadm.go:883] updating cluster {Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:12:26.979029    1580 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:12:26.985391    1580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 13:12:27.003864    1580 docker.go:685] Got preloaded images: 
	I0923 13:12:27.003963    1580 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0923 13:12:27.014845    1580 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 13:12:27.034302    1580 command_runner.go:139] > {"Repositories":{}}
	I0923 13:12:27.044533    1580 ssh_runner.go:195] Run: which lz4
	I0923 13:12:27.048896    1580 command_runner.go:130] > /usr/bin/lz4
	I0923 13:12:27.049675    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0923 13:12:27.058143    1580 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 13:12:27.063515    1580 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 13:12:27.063610    1580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 13:12:27.063791    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0923 13:12:28.341223    1580 docker.go:649] duration metric: took 1.2911526s to copy over tarball
	I0923 13:12:28.349555    1580 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 13:12:36.805535    1580 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4548553s)
	I0923 13:12:36.805627    1580 ssh_runner.go:146] rm: /preloaded.tar.lz4
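The preload flow above probes for the tarball with `stat`, copies it over scp when absent, extracts it with `tar -I lz4` into `/var`, then deletes it. The existence-check pattern can be sketched with a throwaway path standing in for `/preloaded.tar.lz4`:

```shell
# Sketch of ssh_runner's existence check: probe with `stat -c "%s %y"`
# and treat a non-zero exit as "needs copying". A temp path stands in
# for /preloaded.tar.lz4 on the VM.
target="$(mktemp -d)/preloaded.tar.lz4"
if ! stat -c "%s %y" "$target" >/dev/null 2>&1; then
  echo "missing: would scp the preload tarball to $target"
fi

touch "$target"
stat -c "%s" "$target"   # succeeds once the file exists

# The extraction step then runs (as logged), preserving xattrs:
#   sudo tar --xattrs --xattrs-include security.capability \
#     -I lz4 -C /var -xf /preloaded.tar.lz4
```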
	I0923 13:12:36.864567    1580 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 13:12:36.885718    1580 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.15-0":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.31.1":"sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb":"sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.31.1":"sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1":"sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.31.1":"sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44":"sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.31.1":"sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0":"sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I0923 13:12:36.885718    1580 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0923 13:12:36.923981    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:12:37.097872    1580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:12:40.354268    1580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2561758s)
	I0923 13:12:40.362280    1580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 13:12:40.385440    1580 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 13:12:40.386466    1580 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 13:12:40.386466    1580 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 13:12:40.386466    1580 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 13:12:40.386466    1580 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 13:12:40.386466    1580 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 13:12:40.386548    1580 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 13:12:40.386548    1580 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:12:40.386605    1580 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 13:12:40.386671    1580 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:12:40.386671    1580 kubeadm.go:934] updating node { 172.19.153.215 8443 v1.31.1 docker true true} ...
	I0923 13:12:40.386671    1580 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-560300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.153.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:12:40.393184    1580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 13:12:40.462864    1580 command_runner.go:130] > cgroupfs
	I0923 13:12:40.464678    1580 cni.go:84] Creating CNI manager for ""
	I0923 13:12:40.464678    1580 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 13:12:40.464678    1580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:12:40.464782    1580 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.153.215 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-560300 NodeName:multinode-560300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.153.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.153.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:12:40.464914    1580 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.153.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-560300"
	  kubeletExtraArgs:
	    node-ip: 172.19.153.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.153.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:12:40.473681    1580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:12:40.492352    1580 command_runner.go:130] > kubeadm
	I0923 13:12:40.492498    1580 command_runner.go:130] > kubectl
	I0923 13:12:40.492498    1580 command_runner.go:130] > kubelet
	I0923 13:12:40.492498    1580 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:12:40.500916    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:12:40.518603    1580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 13:12:40.551528    1580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:12:40.580283    1580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0923 13:12:40.620258    1580 ssh_runner.go:195] Run: grep 172.19.153.215	control-plane.minikube.internal$ /etc/hosts
	I0923 13:12:40.625692    1580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.153.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:12:40.654550    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:12:40.827265    1580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:12:40.854622    1580 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300 for IP: 172.19.153.215
	I0923 13:12:40.854622    1580 certs.go:194] generating shared ca certs ...
	I0923 13:12:40.854688    1580 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:12:40.855420    1580 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 13:12:40.855755    1580 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 13:12:40.855978    1580 certs.go:256] generating profile certs ...
	I0923 13:12:40.856509    1580 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\client.key
	I0923 13:12:40.856625    1580 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\client.crt with IP's: []
	I0923 13:12:41.592647    1580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\client.crt ...
	I0923 13:12:41.592647    1580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\client.crt: {Name:mk5a5c28e7cf0703424735154d37c27a7a684332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:12:41.594705    1580 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\client.key ...
	I0923 13:12:41.594705    1580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\client.key: {Name:mke802e2658e381d5bf721e8019af409716bc6b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:12:41.594943    1580 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.8a0cfd4f
	I0923 13:12:41.595880    1580 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.8a0cfd4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.153.215]
	I0923 13:12:41.744729    1580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.8a0cfd4f ...
	I0923 13:12:41.744729    1580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.8a0cfd4f: {Name:mk42a98e1b4b4c90ca467c04172593fad8ac1d7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:12:41.745444    1580 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.8a0cfd4f ...
	I0923 13:12:41.745444    1580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.8a0cfd4f: {Name:mk466323293aeb5e7488c86dfed8fb84a9414f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:12:41.746348    1580 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.8a0cfd4f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt
	I0923 13:12:41.763723    1580 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.8a0cfd4f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key
	I0923 13:12:41.765069    1580 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key
	I0923 13:12:41.765164    1580 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.crt with IP's: []
	I0923 13:12:41.895635    1580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.crt ...
	I0923 13:12:41.895635    1580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.crt: {Name:mkb1750e354bd947d0b35ec28348879cb5fb2946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:12:41.896907    1580 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key ...
	I0923 13:12:41.896907    1580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key: {Name:mk61a346b4fb316dc1eff995b3831f8e209ef90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:12:41.897181    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:12:41.898218    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:12:41.898218    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:12:41.898218    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:12:41.898218    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 13:12:41.898218    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 13:12:41.898751    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 13:12:41.907980    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 13:12:41.910772    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 13:12:41.910772    1580 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 13:12:41.910772    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 13:12:41.910772    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 13:12:41.911781    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 13:12:41.911781    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 13:12:41.911781    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 13:12:41.912483    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:12:41.912681    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 13:12:41.912942    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 13:12:41.914687    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:12:41.962854    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 13:12:42.005242    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:12:42.047226    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:12:42.091917    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 13:12:42.133301    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 13:12:42.177223    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:12:42.222854    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:12:42.266146    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:12:42.310104    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 13:12:42.353374    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 13:12:42.399411    1580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:12:42.442584    1580 ssh_runner.go:195] Run: openssl version
	I0923 13:12:42.451263    1580 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:12:42.460864    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:12:42.489934    1580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:12:42.495735    1580 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:12:42.496611    1580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:12:42.505740    1580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:12:42.514022    1580 command_runner.go:130] > b5213941
	I0923 13:12:42.524736    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:12:42.551405    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 13:12:42.578372    1580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 13:12:42.584223    1580 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:12:42.584223    1580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:12:42.593305    1580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 13:12:42.601597    1580 command_runner.go:130] > 51391683
	I0923 13:12:42.611278    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 13:12:42.639438    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 13:12:42.667558    1580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 13:12:42.673991    1580 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:12:42.673991    1580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:12:42.681820    1580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 13:12:42.689382    1580 command_runner.go:130] > 3ec20f2e
	I0923 13:12:42.700790    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:12:42.727916    1580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:12:42.733931    1580 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:12:42.734274    1580 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:12:42.734479    1580 kubeadm.go:392] StartCluster: {Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:12:42.740346    1580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 13:12:42.772454    1580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:12:42.791182    1580 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0923 13:12:42.791335    1580 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0923 13:12:42.791335    1580 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0923 13:12:42.800449    1580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:12:42.828752    1580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:12:42.849869    1580 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0923 13:12:42.850294    1580 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0923 13:12:42.850294    1580 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0923 13:12:42.850294    1580 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:12:42.850543    1580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:12:42.850582    1580 kubeadm.go:157] found existing configuration files:
	
	I0923 13:12:42.859265    1580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:12:42.876793    1580 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:12:42.876793    1580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:12:42.887348    1580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:12:42.913327    1580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:12:42.930508    1580 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:12:42.930695    1580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:12:42.939848    1580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:12:42.966666    1580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:12:42.982316    1580 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:12:42.982700    1580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:12:42.994618    1580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:12:43.021256    1580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:12:43.038173    1580 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:12:43.038173    1580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:12:43.046103    1580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 13:12:43.063082    1580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 13:12:43.257250    1580 command_runner.go:130] ! W0923 13:12:43.467907    1763 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:12:43.257370    1580 kubeadm.go:310] W0923 13:12:43.467907    1763 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:12:43.257706    1580 command_runner.go:130] ! W0923 13:12:43.469036    1763 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:12:43.257706    1580 kubeadm.go:310] W0923 13:12:43.469036    1763 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:12:43.406295    1580 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:12:43.406295    1580 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:12:55.074536    1580 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 13:12:55.074694    1580 command_runner.go:130] > [init] Using Kubernetes version: v1.31.1
	I0923 13:12:55.074910    1580 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 13:12:55.074910    1580 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:12:55.075182    1580 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 13:12:55.075265    1580 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 13:12:55.075454    1580 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 13:12:55.075560    1580 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 13:12:55.075986    1580 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 13:12:55.076068    1580 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 13:12:55.076259    1580 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:12:55.076323    1580 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:12:55.080505    1580 out.go:235]   - Generating certificates and keys ...
	I0923 13:12:55.080505    1580 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0923 13:12:55.080505    1580 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 13:12:55.081104    1580 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 13:12:55.081104    1580 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0923 13:12:55.081104    1580 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 13:12:55.081104    1580 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 13:12:55.081104    1580 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0923 13:12:55.081104    1580 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 13:12:55.081104    1580 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 13:12:55.081104    1580 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0923 13:12:55.081634    1580 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 13:12:55.081774    1580 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0923 13:12:55.081774    1580 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0923 13:12:55.081774    1580 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 13:12:55.081774    1580 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-560300] and IPs [172.19.153.215 127.0.0.1 ::1]
	I0923 13:12:55.081774    1580 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-560300] and IPs [172.19.153.215 127.0.0.1 ::1]
	I0923 13:12:55.081774    1580 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0923 13:12:55.082291    1580 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 13:12:55.082437    1580 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-560300] and IPs [172.19.153.215 127.0.0.1 ::1]
	I0923 13:12:55.082437    1580 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-560300] and IPs [172.19.153.215 127.0.0.1 ::1]
	I0923 13:12:55.082437    1580 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 13:12:55.082437    1580 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 13:12:55.082437    1580 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 13:12:55.082437    1580 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 13:12:55.082437    1580 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0923 13:12:55.082437    1580 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 13:12:55.082437    1580 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:12:55.082437    1580 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:12:55.082437    1580 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:12:55.082437    1580 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:12:55.083306    1580 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 13:12:55.083306    1580 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 13:12:55.083553    1580 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:12:55.083553    1580 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:12:55.083553    1580 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:12:55.083553    1580 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:12:55.083553    1580 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:12:55.083553    1580 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:12:55.084134    1580 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:12:55.084134    1580 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:12:55.084134    1580 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:12:55.084134    1580 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:12:55.087435    1580 out.go:235]   - Booting up control plane ...
	I0923 13:12:55.088231    1580 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:12:55.088231    1580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:12:55.088492    1580 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:12:55.088492    1580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:12:55.088492    1580 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:12:55.088492    1580 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:12:55.088492    1580 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:12:55.088492    1580 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:12:55.089071    1580 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:12:55.089071    1580 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:12:55.089071    1580 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 13:12:55.089071    1580 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 13:12:55.089651    1580 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 13:12:55.089651    1580 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 13:12:55.089651    1580 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:12:55.089651    1580 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:12:55.089651    1580 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.539992ms
	I0923 13:12:55.089651    1580 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.539992ms
	I0923 13:12:55.089651    1580 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 13:12:55.090191    1580 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 13:12:55.090294    1580 command_runner.go:130] > [api-check] The API server is healthy after 6.502799738s
	I0923 13:12:55.090294    1580 kubeadm.go:310] [api-check] The API server is healthy after 6.502799738s
	I0923 13:12:55.090294    1580 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 13:12:55.090673    1580 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 13:12:55.090767    1580 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 13:12:55.090767    1580 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 13:12:55.090767    1580 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 13:12:55.090767    1580 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0923 13:12:55.091334    1580 command_runner.go:130] > [mark-control-plane] Marking the node multinode-560300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 13:12:55.091334    1580 kubeadm.go:310] [mark-control-plane] Marking the node multinode-560300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 13:12:55.091334    1580 command_runner.go:130] > [bootstrap-token] Using token: asd4m9.4hbpqywb5knl0df8
	I0923 13:12:55.091334    1580 kubeadm.go:310] [bootstrap-token] Using token: asd4m9.4hbpqywb5knl0df8
	I0923 13:12:55.095187    1580 out.go:235]   - Configuring RBAC rules ...
	I0923 13:12:55.095948    1580 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 13:12:55.095948    1580 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 13:12:55.096258    1580 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 13:12:55.096258    1580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 13:12:55.096532    1580 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 13:12:55.096532    1580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 13:12:55.096752    1580 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 13:12:55.096752    1580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 13:12:55.096863    1580 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 13:12:55.096863    1580 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 13:12:55.097000    1580 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 13:12:55.097000    1580 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 13:12:55.097131    1580 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 13:12:55.097131    1580 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 13:12:55.097131    1580 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 13:12:55.097269    1580 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0923 13:12:55.097360    1580 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 13:12:55.097360    1580 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0923 13:12:55.097360    1580 kubeadm.go:310] 
	I0923 13:12:55.097684    1580 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 13:12:55.097722    1580 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0923 13:12:55.097722    1580 kubeadm.go:310] 
	I0923 13:12:55.097766    1580 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0923 13:12:55.097766    1580 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 13:12:55.097766    1580 kubeadm.go:310] 
	I0923 13:12:55.097766    1580 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0923 13:12:55.097766    1580 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 13:12:55.097766    1580 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 13:12:55.097766    1580 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 13:12:55.097766    1580 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 13:12:55.097766    1580 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 13:12:55.098339    1580 kubeadm.go:310] 
	I0923 13:12:55.098473    1580 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 13:12:55.098473    1580 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0923 13:12:55.098473    1580 kubeadm.go:310] 
	I0923 13:12:55.098941    1580 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 13:12:55.098941    1580 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 13:12:55.098941    1580 kubeadm.go:310] 
	I0923 13:12:55.099108    1580 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 13:12:55.099108    1580 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0923 13:12:55.099108    1580 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 13:12:55.099108    1580 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 13:12:55.099327    1580 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 13:12:55.099327    1580 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 13:12:55.099327    1580 kubeadm.go:310] 
	I0923 13:12:55.099327    1580 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0923 13:12:55.099327    1580 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 13:12:55.099855    1580 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 13:12:55.099855    1580 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0923 13:12:55.099895    1580 kubeadm.go:310] 
	I0923 13:12:55.100063    1580 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token asd4m9.4hbpqywb5knl0df8 \
	I0923 13:12:55.100063    1580 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token asd4m9.4hbpqywb5knl0df8 \
	I0923 13:12:55.100298    1580 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 \
	I0923 13:12:55.100298    1580 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 \
	I0923 13:12:55.100298    1580 kubeadm.go:310] 	--control-plane 
	I0923 13:12:55.100678    1580 command_runner.go:130] > 	--control-plane 
	I0923 13:12:55.100678    1580 kubeadm.go:310] 
	I0923 13:12:55.100980    1580 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0923 13:12:55.100980    1580 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 13:12:55.100980    1580 kubeadm.go:310] 
	I0923 13:12:55.101324    1580 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token asd4m9.4hbpqywb5knl0df8 \
	I0923 13:12:55.101324    1580 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token asd4m9.4hbpqywb5knl0df8 \
	I0923 13:12:55.101545    1580 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
	I0923 13:12:55.101545    1580 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
	I0923 13:12:55.101545    1580 cni.go:84] Creating CNI manager for ""
	I0923 13:12:55.101545    1580 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0923 13:12:55.104547    1580 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 13:12:55.114592    1580 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 13:12:55.123187    1580 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0923 13:12:55.123187    1580 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0923 13:12:55.123187    1580 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0923 13:12:55.123187    1580 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 13:12:55.123187    1580 command_runner.go:130] > Access: 2024-09-23 13:11:14.314180100 +0000
	I0923 13:12:55.123187    1580 command_runner.go:130] > Modify: 2024-09-20 04:01:25.000000000 +0000
	I0923 13:12:55.123187    1580 command_runner.go:130] > Change: 2024-09-23 13:11:04.415000000 +0000
	I0923 13:12:55.123187    1580 command_runner.go:130] >  Birth: -
	I0923 13:12:55.123187    1580 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 13:12:55.123187    1580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 13:12:55.159632    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 13:12:55.634735    1580 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0923 13:12:55.634820    1580 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0923 13:12:55.634820    1580 command_runner.go:130] > serviceaccount/kindnet created
	I0923 13:12:55.634820    1580 command_runner.go:130] > daemonset.apps/kindnet created
	I0923 13:12:55.634982    1580 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 13:12:55.645575    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:12:55.646856    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-560300 minikube.k8s.io/updated_at=2024_09_23T13_12_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=multinode-560300 minikube.k8s.io/primary=true
	I0923 13:12:55.654494    1580 command_runner.go:130] > -16
	I0923 13:12:55.654494    1580 ops.go:34] apiserver oom_adj: -16
	I0923 13:12:55.883907    1580 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0923 13:12:55.893781    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:12:55.899102    1580 command_runner.go:130] > node/multinode-560300 labeled
	I0923 13:12:55.994165    1580 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0923 13:12:56.393742    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:12:56.496765    1580 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0923 13:12:56.892518    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:12:56.994735    1580 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0923 13:12:57.396924    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:12:57.493661    1580 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0923 13:12:57.897567    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:12:57.998650    1580 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0923 13:12:58.393305    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:12:58.482999    1580 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0923 13:12:58.896763    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:12:59.013333    1580 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0923 13:12:59.393775    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:12:59.531691    1580 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0923 13:12:59.894241    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 13:13:00.028956    1580 command_runner.go:130] > NAME      SECRETS   AGE
	I0923 13:13:00.028956    1580 command_runner.go:130] > default   0         1s
	I0923 13:13:00.028956    1580 kubeadm.go:1113] duration metric: took 4.3936768s to wait for elevateKubeSystemPrivileges
	I0923 13:13:00.028956    1580 kubeadm.go:394] duration metric: took 17.2933093s to StartCluster
	I0923 13:13:00.028956    1580 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:13:00.029271    1580 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:13:00.030267    1580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:13:00.031263    1580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 13:13:00.031263    1580 start.go:235] Will wait 6m0s for node &{Name: IP:172.19.153.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 13:13:00.031263    1580 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 13:13:00.031263    1580 addons.go:69] Setting storage-provisioner=true in profile "multinode-560300"
	I0923 13:13:00.031263    1580 addons.go:234] Setting addon storage-provisioner=true in "multinode-560300"
	I0923 13:13:00.031263    1580 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:13:00.031263    1580 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:13:00.031263    1580 addons.go:69] Setting default-storageclass=true in profile "multinode-560300"
	I0923 13:13:00.031263    1580 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-560300"
	I0923 13:13:00.032285    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:13:00.033249    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:13:00.039257    1580 out.go:177] * Verifying Kubernetes components...
	I0923 13:13:00.054256    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:13:00.199512    1580 command_runner.go:130] > apiVersion: v1
	I0923 13:13:00.199591    1580 command_runner.go:130] > data:
	I0923 13:13:00.199591    1580 command_runner.go:130] >   Corefile: |
	I0923 13:13:00.199591    1580 command_runner.go:130] >     .:53 {
	I0923 13:13:00.199591    1580 command_runner.go:130] >         errors
	I0923 13:13:00.199591    1580 command_runner.go:130] >         health {
	I0923 13:13:00.199591    1580 command_runner.go:130] >            lameduck 5s
	I0923 13:13:00.199661    1580 command_runner.go:130] >         }
	I0923 13:13:00.199661    1580 command_runner.go:130] >         ready
	I0923 13:13:00.199661    1580 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0923 13:13:00.199661    1580 command_runner.go:130] >            pods insecure
	I0923 13:13:00.199724    1580 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0923 13:13:00.199724    1580 command_runner.go:130] >            ttl 30
	I0923 13:13:00.199724    1580 command_runner.go:130] >         }
	I0923 13:13:00.199724    1580 command_runner.go:130] >         prometheus :9153
	I0923 13:13:00.199724    1580 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0923 13:13:00.199724    1580 command_runner.go:130] >            max_concurrent 1000
	I0923 13:13:00.199802    1580 command_runner.go:130] >         }
	I0923 13:13:00.199802    1580 command_runner.go:130] >         cache 30
	I0923 13:13:00.199802    1580 command_runner.go:130] >         loop
	I0923 13:13:00.199802    1580 command_runner.go:130] >         reload
	I0923 13:13:00.199802    1580 command_runner.go:130] >         loadbalance
	I0923 13:13:00.199802    1580 command_runner.go:130] >     }
	I0923 13:13:00.199873    1580 command_runner.go:130] > kind: ConfigMap
	I0923 13:13:00.199873    1580 command_runner.go:130] > metadata:
	I0923 13:13:00.199873    1580 command_runner.go:130] >   creationTimestamp: "2024-09-23T13:12:54Z"
	I0923 13:13:00.199873    1580 command_runner.go:130] >   name: coredns
	I0923 13:13:00.199938    1580 command_runner.go:130] >   namespace: kube-system
	I0923 13:13:00.199938    1580 command_runner.go:130] >   resourceVersion: "263"
	I0923 13:13:00.199938    1580 command_runner.go:130] >   uid: 7d414f2b-f854-48a9-8177-c85f1bf3308c
	I0923 13:13:00.200144    1580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.144.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 13:13:00.330097    1580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:13:00.684840    1580 command_runner.go:130] > configmap/coredns replaced
	I0923 13:13:00.684918    1580 start.go:971] {"host.minikube.internal": 172.19.144.1} host record injected into CoreDNS's ConfigMap
	I0923 13:13:00.686813    1580 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:13:00.688153    1580 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.153.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:13:00.689746    1580 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:13:00.689746    1580 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.153.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:13:00.690365    1580 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 13:13:00.691163    1580 node_ready.go:35] waiting up to 6m0s for node "multinode-560300" to be "Ready" ...
	I0923 13:13:00.691163    1580 round_trippers.go:463] GET https://172.19.153.215:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0923 13:13:00.691229    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:00.691263    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:00.691263    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:00.691263    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:00.691370    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:00.691370    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:00.691370    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:00.705214    1580 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0923 13:13:00.705214    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:00.705214    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:00.705214    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:00.705214    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:00 GMT
	I0923 13:13:00.705214    1580 round_trippers.go:580]     Audit-Id: 83a6c1ed-f81e-48a6-a89b-59bb0cbb54d6
	I0923 13:13:00.705214    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:00.705214    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:00.705214    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:00.706260    1580 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 13:13:00.706260    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:00.706260    1580 round_trippers.go:580]     Audit-Id: f884ae45-1a0f-4352-984c-da3ecb877042
	I0923 13:13:00.706260    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:00.706260    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:00.706260    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:00.706260    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:00.706260    1580 round_trippers.go:580]     Content-Length: 291
	I0923 13:13:00.706260    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:00 GMT
	I0923 13:13:00.706260    1580 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a2f92d5e-f034-42b5-927b-6035318008ff","resourceVersion":"382","creationTimestamp":"2024-09-23T13:12:54Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0923 13:13:00.706260    1580 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a2f92d5e-f034-42b5-927b-6035318008ff","resourceVersion":"382","creationTimestamp":"2024-09-23T13:12:54Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0923 13:13:00.707234    1580 round_trippers.go:463] PUT https://172.19.153.215:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0923 13:13:00.707234    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:00.707234    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:00.707234    1580 round_trippers.go:473]     Content-Type: application/json
	I0923 13:13:00.707234    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:00.727243    1580 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0923 13:13:00.727343    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:00.727343    1580 round_trippers.go:580]     Audit-Id: 89445a72-91d6-425e-9d32-7754fd35b6ae
	I0923 13:13:00.727343    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:00.727343    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:00.727343    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:00.727343    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:00.727343    1580 round_trippers.go:580]     Content-Length: 291
	I0923 13:13:00.727419    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:00 GMT
	I0923 13:13:00.727461    1580 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a2f92d5e-f034-42b5-927b-6035318008ff","resourceVersion":"384","creationTimestamp":"2024-09-23T13:12:54Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0923 13:13:01.191278    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:01.191278    1580 round_trippers.go:463] GET https://172.19.153.215:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0923 13:13:01.191278    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:01.191278    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:01.191278    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:01.191278    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:01.191278    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:01.191278    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:01.197188    1580 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:13:01.197188    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:01.197188    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:01 GMT
	I0923 13:13:01.197188    1580 round_trippers.go:580]     Audit-Id: 2a693ad9-956b-4e59-bd20-c41473009f49
	I0923 13:13:01.197188    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:01.197287    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:01.197287    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:01.197287    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:01.197287    1580 round_trippers.go:580]     Content-Length: 291
	I0923 13:13:01.197335    1580 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a2f92d5e-f034-42b5-927b-6035318008ff","resourceVersion":"394","creationTimestamp":"2024-09-23T13:12:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0923 13:13:01.197518    1580 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-560300" context rescaled to 1 replicas
	I0923 13:13:01.205801    1580 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 13:13:01.205801    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:01.205801    1580 round_trippers.go:580]     Audit-Id: 8f4e9f16-a163-44c2-aa55-4caca6312b4e
	I0923 13:13:01.205801    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:01.205801    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:01.205801    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:01.205801    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:01.205801    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:01 GMT
	I0923 13:13:01.205801    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:01.692355    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:01.692355    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:01.692355    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:01.692355    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:01.696685    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:01.696685    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:01.696755    1580 round_trippers.go:580]     Audit-Id: 440673c1-3c83-4eae-a58d-cbf8c20a0696
	I0923 13:13:01.696755    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:01.696755    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:01.696755    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:01.696755    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:01.696755    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:01 GMT
	I0923 13:13:01.697593    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:02.039698    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:13:02.040408    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:02.041291    1580 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:13:02.041824    1580 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.153.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:13:02.042490    1580 addons.go:234] Setting addon default-storageclass=true in "multinode-560300"
	I0923 13:13:02.042620    1580 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:13:02.043440    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:13:02.043440    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:13:02.043440    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:02.046130    1580 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:13:02.047731    1580 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:13:02.047731    1580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 13:13:02.047731    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:13:02.191746    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:02.191746    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:02.191746    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:02.191746    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:02.195758    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:02.195758    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:02.195758    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:02.195758    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:02 GMT
	I0923 13:13:02.195758    1580 round_trippers.go:580]     Audit-Id: 349310ab-faf5-455f-9b16-cfe453a6fd49
	I0923 13:13:02.195758    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:02.195758    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:02.195758    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:02.196234    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:02.691911    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:02.691911    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:02.691911    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:02.691911    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:02.696016    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:02.696016    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:02.696016    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:02.696016    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:02 GMT
	I0923 13:13:02.696276    1580 round_trippers.go:580]     Audit-Id: 66271aa2-2ff7-4eb5-8b0d-0ab244bf274b
	I0923 13:13:02.696276    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:02.696276    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:02.696276    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:02.696336    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:02.697185    1580 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:13:03.192408    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:03.192408    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:03.192408    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:03.192408    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:03.196556    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:03.196556    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:03.196556    1580 round_trippers.go:580]     Audit-Id: 500f96f9-10fa-4c76-9091-cbcbf7e26834
	I0923 13:13:03.196556    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:03.196556    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:03.196556    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:03.196556    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:03.196556    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:03 GMT
	I0923 13:13:03.197770    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:03.691767    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:03.691767    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:03.691767    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:03.691767    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:03.695505    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:03.695505    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:03.695668    1580 round_trippers.go:580]     Audit-Id: 41b81f9e-2a89-4226-b1c9-9ab43c791ddf
	I0923 13:13:03.695668    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:03.695668    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:03.695668    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:03.695668    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:03.695668    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:03 GMT
	I0923 13:13:03.696013    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:04.143217    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:13:04.143217    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:04.143299    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:13:04.160332    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:13:04.160332    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:04.160332    1580 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 13:13:04.160332    1580 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 13:13:04.160332    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:13:04.191963    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:04.191963    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:04.191963    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:04.191963    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:04.195501    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:04.195501    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:04.195501    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:04.195501    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:04 GMT
	I0923 13:13:04.195501    1580 round_trippers.go:580]     Audit-Id: b1fdf4b1-bb14-4448-980f-61d2700f730b
	I0923 13:13:04.195501    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:04.195501    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:04.195501    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:04.195501    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:04.692505    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:04.692505    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:04.692505    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:04.692505    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:04.695594    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:04.695594    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:04.695594    1580 round_trippers.go:580]     Audit-Id: b761c1d9-0ad4-4e83-9d57-027f73fe8d8c
	I0923 13:13:04.695594    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:04.695594    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:04.695594    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:04.695594    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:04.695594    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:04 GMT
	I0923 13:13:04.695594    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:05.192265    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:05.192265    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:05.192265    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:05.192265    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:05.195980    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:05.196073    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:05.196073    1580 round_trippers.go:580]     Audit-Id: 5cbf4cfd-d080-4534-a4df-69123daae586
	I0923 13:13:05.196073    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:05.196073    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:05.196073    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:05.196073    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:05.196073    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:05 GMT
	I0923 13:13:05.197423    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:05.198379    1580 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:13:05.691987    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:05.691987    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:05.691987    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:05.691987    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:05.917699    1580 round_trippers.go:574] Response Status: 200 OK in 225 milliseconds
	I0923 13:13:05.917699    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:05.917699    1580 round_trippers.go:580]     Audit-Id: 65015ac2-3018-4982-96d8-75c87fd1051d
	I0923 13:13:05.917699    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:05.917699    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:05.917699    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:05.917699    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:05.917699    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:06 GMT
	I0923 13:13:05.917699    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:06.191723    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:06.191723    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:06.191723    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:06.191723    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:06.194895    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:06.194969    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:06.194969    1580 round_trippers.go:580]     Audit-Id: 12311b20-938e-46d8-a100-0096dbcd1b6d
	I0923 13:13:06.194969    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:06.195047    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:06.195047    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:06.195047    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:06.195047    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:06 GMT
	I0923 13:13:06.195640    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:06.210719    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:13:06.210987    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:06.210987    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:13:06.573562    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:13:06.574587    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:06.574645    1580 sshutil.go:53] new ssh client: &{IP:172.19.153.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:13:06.692577    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:06.692924    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:06.692924    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:06.692924    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:06.718700    1580 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0923 13:13:06.718700    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:06.718700    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:06.718700    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:06 GMT
	I0923 13:13:06.718700    1580 round_trippers.go:580]     Audit-Id: 257c39aa-6d67-4d30-8cd0-77e5bf4ec563
	I0923 13:13:06.718700    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:06.718700    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:06.718700    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:06.718700    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:06.719880    1580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 13:13:07.192621    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:07.192621    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:07.192621    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:07.192621    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:07.364655    1580 round_trippers.go:574] Response Status: 200 OK in 171 milliseconds
	I0923 13:13:07.364718    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:07.364718    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:07.364718    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:07.364718    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:07.364718    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:07.364718    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:07 GMT
	I0923 13:13:07.364718    1580 round_trippers.go:580]     Audit-Id: 86629d23-b3d3-4dd4-b5b1-a0ac18610a62
	I0923 13:13:07.364936    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:07.365747    1580 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:13:07.424120    1580 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0923 13:13:07.424120    1580 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0923 13:13:07.424120    1580 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0923 13:13:07.424120    1580 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0923 13:13:07.424120    1580 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0923 13:13:07.424120    1580 command_runner.go:130] > pod/storage-provisioner created
	I0923 13:13:07.692221    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:07.692221    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:07.692221    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:07.692221    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:07.695311    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:07.696203    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:07.696203    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:07.696203    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:07.696203    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:07.696203    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:07.696203    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:07 GMT
	I0923 13:13:07.696203    1580 round_trippers.go:580]     Audit-Id: f8f3474f-d88f-4402-bf0a-da4ad90c7c63
	I0923 13:13:07.696480    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:08.192480    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:08.192480    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:08.192480    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:08.192480    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:08.196774    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:08.196774    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:08.196774    1580 round_trippers.go:580]     Audit-Id: 270a2fa8-e244-4666-a495-9149407b4c2e
	I0923 13:13:08.196774    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:08.196774    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:08.196774    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:08.196774    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:08.196774    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:08 GMT
	I0923 13:13:08.197378    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:08.539268    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:13:08.539420    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:08.539977    1580 sshutil.go:53] new ssh client: &{IP:172.19.153.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:13:08.661339    1580 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 13:13:08.692324    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:08.692324    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:08.692324    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:08.692324    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:08.696322    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:08.696322    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:08.696322    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:08.696322    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:08.696322    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:08.696322    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:08.696322    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:08 GMT
	I0923 13:13:08.696322    1580 round_trippers.go:580]     Audit-Id: 4c8e59df-b494-4e83-a658-90689bba2c4f
	I0923 13:13:08.696322    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:08.802545    1580 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0923 13:13:08.802545    1580 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 13:13:08.802545    1580 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 13:13:08.802545    1580 round_trippers.go:463] GET https://172.19.153.215:8443/apis/storage.k8s.io/v1/storageclasses
	I0923 13:13:08.802545    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:08.802545    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:08.802545    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:08.805651    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:08.806468    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:08.806468    1580 round_trippers.go:580]     Audit-Id: b7ca2a5f-bb8e-4e0f-82a0-9730bbace62b
	I0923 13:13:08.806468    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:08.806468    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:08.806468    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:08.806468    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:08.806468    1580 round_trippers.go:580]     Content-Length: 1273
	I0923 13:13:08.806468    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:09 GMT
	I0923 13:13:08.806468    1580 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"standard","uid":"823691eb-7cc3-459a-be7d-6403829b37e2","resourceVersion":"420","creationTimestamp":"2024-09-23T13:13:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T13:13:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0923 13:13:08.807032    1580 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"823691eb-7cc3-459a-be7d-6403829b37e2","resourceVersion":"420","creationTimestamp":"2024-09-23T13:13:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T13:13:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0923 13:13:08.807183    1580 round_trippers.go:463] PUT https://172.19.153.215:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 13:13:08.807183    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:08.807183    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:08.807183    1580 round_trippers.go:473]     Content-Type: application/json
	I0923 13:13:08.807183    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:08.813917    1580 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:13:08.813917    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:08.813917    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:09 GMT
	I0923 13:13:08.813917    1580 round_trippers.go:580]     Audit-Id: a29eb8a3-7f4f-485c-9423-3c9a100b33b6
	I0923 13:13:08.813917    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:08.813917    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:08.813917    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:08.813917    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:08.813917    1580 round_trippers.go:580]     Content-Length: 1220
	I0923 13:13:08.813917    1580 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"823691eb-7cc3-459a-be7d-6403829b37e2","resourceVersion":"420","creationTimestamp":"2024-09-23T13:13:09Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T13:13:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0923 13:13:08.819299    1580 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0923 13:13:08.820861    1580 addons.go:510] duration metric: took 8.7890045s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0923 13:13:09.192596    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:09.193297    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:09.193297    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:09.193297    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:09.198165    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:09.198274    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:09.198274    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:09.198274    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:09.198274    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:09.198274    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:09.198274    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:09 GMT
	I0923 13:13:09.198274    1580 round_trippers.go:580]     Audit-Id: 37304c8e-472e-457d-9fda-0a06454e3933
	I0923 13:13:09.198680    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:09.692915    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:09.693468    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:09.693468    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:09.693468    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:09.696921    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:09.697006    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:09.697006    1580 round_trippers.go:580]     Audit-Id: b034acac-d481-4212-80b8-ca347d59b338
	I0923 13:13:09.697006    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:09.697006    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:09.697006    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:09.697079    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:09.697079    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:09 GMT
	I0923 13:13:09.697432    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:09.698113    1580 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:13:10.192618    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:10.192618    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:10.192618    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:10.192618    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:10.197227    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:10.197227    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:10.197227    1580 round_trippers.go:580]     Audit-Id: bfc55ed6-3d31-48ec-9c85-945169adb1cd
	I0923 13:13:10.197227    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:10.197227    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:10.197227    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:10.197227    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:10.197227    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:10 GMT
	I0923 13:13:10.197467    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:10.692337    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:10.692682    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:10.692682    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:10.692782    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:10.699612    1580 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:13:10.699612    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:10.699612    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:10 GMT
	I0923 13:13:10.699612    1580 round_trippers.go:580]     Audit-Id: 7a8ac32b-a6ad-4a65-9b32-ecf575bd2503
	I0923 13:13:10.699612    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:10.699612    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:10.699612    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:10.699612    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:10.700232    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:11.192593    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:11.192593    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:11.192593    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:11.192593    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:11.196701    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:11.196701    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:11.196701    1580 round_trippers.go:580]     Audit-Id: eba7b6de-0491-4d3e-b5ab-83d9fe17cb62
	I0923 13:13:11.196701    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:11.196701    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:11.196701    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:11.196701    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:11.196701    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:11 GMT
	I0923 13:13:11.197165    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:11.692883    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:11.693357    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:11.693357    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:11.693357    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:11.697167    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:11.697167    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:11.697167    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:11.697167    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:11 GMT
	I0923 13:13:11.697167    1580 round_trippers.go:580]     Audit-Id: a2306a36-40d6-4ae5-b9de-40b535e01d17
	I0923 13:13:11.697167    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:11.697167    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:11.697167    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:11.697438    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:12.192583    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:12.192583    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:12.192583    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:12.192583    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:12.197057    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:12.197151    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:12.197151    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:12.197151    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:12.197151    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:12 GMT
	I0923 13:13:12.197151    1580 round_trippers.go:580]     Audit-Id: ba636b63-4e4d-4757-ae37-6ad75a51bdc3
	I0923 13:13:12.197151    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:12.197253    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:12.197696    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:12.198406    1580 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:13:12.693023    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:12.693023    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:12.693023    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:12.693023    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:12.697117    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:12.697117    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:12.697687    1580 round_trippers.go:580]     Audit-Id: 1e97fda4-0119-4a7b-a35f-51f20a67bca8
	I0923 13:13:12.697687    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:12.697687    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:12.697687    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:12.697687    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:12.697687    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:12 GMT
	I0923 13:13:12.698624    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:13.192267    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:13.192267    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:13.192267    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:13.192267    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:13.200039    1580 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 13:13:13.200039    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:13.200039    1580 round_trippers.go:580]     Audit-Id: 208e2d77-a3dd-42f3-a003-7aaa7c8511b2
	I0923 13:13:13.200039    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:13.200039    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:13.200039    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:13.200039    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:13.200039    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:13 GMT
	I0923 13:13:13.200039    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:13.693126    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:13.693126    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:13.693126    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:13.693126    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:13.697276    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:13.697276    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:13.697276    1580 round_trippers.go:580]     Audit-Id: eda4052e-2d6b-4248-a826-4409765eb8e6
	I0923 13:13:13.697276    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:13.697276    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:13.697276    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:13.697548    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:13.697548    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:13 GMT
	I0923 13:13:13.697774    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:14.192819    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:14.192819    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:14.192819    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:14.192819    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:14.197560    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:14.197560    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:14.197673    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:14.197673    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:14.197673    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:14.197673    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:14 GMT
	I0923 13:13:14.197673    1580 round_trippers.go:580]     Audit-Id: ecc833cf-413a-4e9c-a089-cb6a15c8a73c
	I0923 13:13:14.197673    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:14.198083    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:14.198914    1580 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:13:14.693111    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:14.693111    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:14.693111    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:14.693111    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:14.695964    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:14.696924    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:14.696924    1580 round_trippers.go:580]     Audit-Id: b0ec0224-c7f5-4e48-9981-390cdcd1cd22
	I0923 13:13:14.696924    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:14.696924    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:14.696924    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:14.696924    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:14.696924    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:14 GMT
	I0923 13:13:14.697212    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:15.192405    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:15.192405    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:15.192405    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:15.192405    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:15.196576    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:15.196576    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:15.196576    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:15.196576    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:15.196576    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:15.196576    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:15.196576    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:15 GMT
	I0923 13:13:15.196576    1580 round_trippers.go:580]     Audit-Id: dbe8b41b-b951-4987-a239-d6f13330b2af
	I0923 13:13:15.196576    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:15.692772    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:15.692772    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:15.692772    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:15.692772    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:15.695864    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:15.696759    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:15.696759    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:15.696759    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:15.696759    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:15 GMT
	I0923 13:13:15.696759    1580 round_trippers.go:580]     Audit-Id: cf79154c-eed7-4475-9c0a-13c438d00a4a
	I0923 13:13:15.696759    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:15.696759    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:15.697042    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:16.192879    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:16.192879    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:16.192879    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:16.192879    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:16.198165    1580 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:13:16.198291    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:16.198291    1580 round_trippers.go:580]     Audit-Id: 86b0ecf5-6bc0-4147-9528-8e7be8f3a860
	I0923 13:13:16.198291    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:16.198291    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:16.198291    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:16.198291    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:16.198291    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:16 GMT
	I0923 13:13:16.198291    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:16.692710    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:16.692710    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:16.692710    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:16.692710    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:16.695572    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:16.696279    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:16.696279    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:16.696279    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:16.696279    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:16 GMT
	I0923 13:13:16.696279    1580 round_trippers.go:580]     Audit-Id: 6b677aa3-52c8-4550-b474-412aed73992f
	I0923 13:13:16.696279    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:16.696279    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:16.696621    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:16.697264    1580 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:13:17.193076    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:17.193076    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:17.193076    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:17.193076    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:17.196402    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:17.196402    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:17.196402    1580 round_trippers.go:580]     Audit-Id: a6effd96-960c-4968-9c9f-074c00a54f9f
	I0923 13:13:17.196402    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:17.196402    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:17.196402    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:17.196402    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:17.196402    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:17 GMT
	I0923 13:13:17.196730    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:17.693014    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:17.693096    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:17.693096    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:17.693096    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:17.696329    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:17.696627    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:17.696627    1580 round_trippers.go:580]     Audit-Id: 8f8f9c82-4ce0-4f13-a5fa-567626a49d26
	I0923 13:13:17.696627    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:17.696627    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:17.696627    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:17.696627    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:17.696627    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:17 GMT
	I0923 13:13:17.696728    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:18.192930    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:18.192930    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:18.192930    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:18.192930    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:18.196872    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:18.196872    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:18.196872    1580 round_trippers.go:580]     Audit-Id: 65ccf38d-3b66-4b42-9074-fbf47c2a1356
	I0923 13:13:18.196872    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:18.196872    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:18.196872    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:18.196872    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:18.196872    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:18 GMT
	I0923 13:13:18.197119    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:18.693592    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:18.693592    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:18.693685    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:18.693685    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:18.697523    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:18.697523    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:18.697523    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:18.697523    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:18.697523    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:18 GMT
	I0923 13:13:18.697607    1580 round_trippers.go:580]     Audit-Id: 2bcfc2f4-5413-445e-a4d9-41a173f43dad
	I0923 13:13:18.697607    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:18.697607    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:18.697809    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:18.697809    1580 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:13:19.193608    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:19.193656    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:19.193656    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:19.193656    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:19.200592    1580 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:13:19.200653    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:19.200728    1580 round_trippers.go:580]     Audit-Id: 24e97531-61f6-4660-a86b-c11041533da1
	I0923 13:13:19.200750    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:19.200750    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:19.200750    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:19.200750    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:19.200750    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:19 GMT
	I0923 13:13:19.200750    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:19.693829    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:19.693829    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:19.693829    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:19.693829    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:19.697549    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:19.697549    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:19.697549    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:19.697549    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:19 GMT
	I0923 13:13:19.697549    1580 round_trippers.go:580]     Audit-Id: b6f06338-6bd9-4803-b8b0-4f0a8bc2ae78
	I0923 13:13:19.697549    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:19.697549    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:19.697549    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:19.697651    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:20.193256    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:20.193256    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:20.193256    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:20.193256    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:20.197091    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:20.197091    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:20.197091    1580 round_trippers.go:580]     Audit-Id: 6d3d563a-0302-4ace-ac59-54a9fa1eef15
	I0923 13:13:20.197091    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:20.197091    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:20.197091    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:20.197091    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:20.197091    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:20 GMT
	I0923 13:13:20.197193    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:20.694171    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:20.694171    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:20.694247    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:20.694247    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:20.698028    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:20.698028    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:20.698028    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:20.698028    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:20 GMT
	I0923 13:13:20.698028    1580 round_trippers.go:580]     Audit-Id: 5f3cf60c-20c1-47de-9d88-7908fa5bb76f
	I0923 13:13:20.698028    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:20.698028    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:20.698028    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:20.698804    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:20.699211    1580 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:13:21.193267    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:21.193267    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:21.193267    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:21.193267    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:21.197087    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:21.197603    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:21.197603    1580 round_trippers.go:580]     Audit-Id: bb740f98-b8f8-4f65-bff4-1473c2b6bc3c
	I0923 13:13:21.197603    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:21.197603    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:21.197603    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:21.197603    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:21.197603    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:21 GMT
	I0923 13:13:21.197960    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:21.693677    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:21.693677    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:21.693677    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:21.693677    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:21.696895    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:21.697385    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:21.697385    1580 round_trippers.go:580]     Audit-Id: d92586a7-6777-4220-bdbe-605bae411e40
	I0923 13:13:21.697385    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:21.697385    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:21.697385    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:21.697385    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:21.697385    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:21 GMT
	I0923 13:13:21.697792    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"370","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0923 13:13:22.193460    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:22.193460    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:22.193460    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:22.193460    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:22.200859    1580 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 13:13:22.200859    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:22.200859    1580 round_trippers.go:580]     Audit-Id: d1480d28-58a2-4472-9a55-1cd5f123b714
	I0923 13:13:22.200859    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:22.200937    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:22.200937    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:22.200937    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:22.200937    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:22 GMT
	I0923 13:13:22.201059    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:22.201511    1580 node_ready.go:49] node "multinode-560300" has status "Ready":"True"
	I0923 13:13:22.201511    1580 node_ready.go:38] duration metric: took 21.5088303s for node "multinode-560300" to be "Ready" ...
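	[editor's note] The trace above is minikube's node-readiness wait loop: it polls GET /api/v1/nodes/multinode-560300 roughly every 500 ms (requests land at ~.19 s and ~.69 s of each second) and logs `"Ready":"False"` until the Node's Ready condition flips to True, at which point the 21.5 s duration metric is recorded. A minimal, self-contained sketch of the condition check driving that loop — an illustration shaped like the truncated response bodies above, not minikube's actual node_ready.go code:

```python
def node_is_ready(node: dict) -> bool:
    """Return True when the Node object's Ready condition reports "True".

    Simplified stand-in for the check behind minikube's node_ready.go
    wait loop; the real poller issues the GET requests seen in the log
    and re-checks this until it passes or the wait deadline expires.
    """
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False  # no Ready condition yet -> treat as not ready

# Payloads shaped like the (truncated) Node response bodies in the log:
polling = {"kind": "Node", "status": {"conditions": [{"type": "Ready", "status": "False"}]}}
done = {"kind": "Node", "status": {"conditions": [{"type": "Ready", "status": "True"}]}}

print(node_is_ready(polling))  # False -> loop logs "Ready":"False" and sleeps
print(node_is_ready(done))     # True  -> loop exits and records the duration
```

	[end editor's note] After the node turns Ready, the same poll-until-condition pattern repeats below for each system-critical pod (coredns, etcd, kube-apiserver, ...), checking the Pod's Ready condition instead of the Node's.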
	I0923 13:13:22.201511    1580 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:13:22.201511    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods
	I0923 13:13:22.201511    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:22.201511    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:22.201511    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:22.204464    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:22.204464    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:22.204464    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:22 GMT
	I0923 13:13:22.204464    1580 round_trippers.go:580]     Audit-Id: 0044d8a0-6966-413f-8ce1-933ba606dd7c
	I0923 13:13:22.204464    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:22.204464    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:22.205479    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:22.205479    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:22.205479    1580 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"432","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57862 chars]
	I0923 13:13:22.210652    1580 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:22.210807    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:13:22.210807    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:22.210807    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:22.210807    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:22.213585    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:22.213585    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:22.213585    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:22 GMT
	I0923 13:13:22.213585    1580 round_trippers.go:580]     Audit-Id: eb229dcc-e066-4b1e-b23d-b8102887e944
	I0923 13:13:22.213585    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:22.213585    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:22.213585    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:22.213585    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:22.213585    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"432","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I0923 13:13:22.214565    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:22.214565    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:22.214565    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:22.214565    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:22.217228    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:22.217228    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:22.217228    1580 round_trippers.go:580]     Audit-Id: ac99e3f2-ed1a-440a-b9f8-1c25bcee8e2e
	I0923 13:13:22.217573    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:22.217573    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:22.217573    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:22.217573    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:22.217612    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:22 GMT
	I0923 13:13:22.217639    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:22.711017    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:13:22.711017    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:22.711017    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:22.711017    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:22.714007    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:22.714942    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:22.714942    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:22.714942    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:22.714942    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:22.714942    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:22.715018    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:22 GMT
	I0923 13:13:22.715018    1580 round_trippers.go:580]     Audit-Id: a4671982-66ed-4ec2-bc74-efbc8255c0a8
	I0923 13:13:22.715244    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"432","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I0923 13:13:22.715899    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:22.715899    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:22.715899    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:22.715988    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:22.718004    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:22.718758    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:22.718758    1580 round_trippers.go:580]     Audit-Id: 6da62f1d-9e29-46de-9206-bc5f160ab2e7
	I0923 13:13:22.718758    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:22.718758    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:22.718758    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:22.718758    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:22.718758    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:22 GMT
	I0923 13:13:22.719133    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:23.211497    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:13:23.211497    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:23.211497    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:23.211497    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:23.214481    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:23.215308    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:23.215308    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:23.215308    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:23.215308    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:23 GMT
	I0923 13:13:23.215308    1580 round_trippers.go:580]     Audit-Id: 0b3d7291-eac3-44ad-99a8-0495dac6b257
	I0923 13:13:23.215308    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:23.215308    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:23.216229    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"432","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I0923 13:13:23.216724    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:23.216724    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:23.216724    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:23.216724    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:23.220471    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:23.220471    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:23.220471    1580 round_trippers.go:580]     Audit-Id: 74d26cf6-583f-40bd-8489-08f61acbe045
	I0923 13:13:23.221168    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:23.221168    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:23.221168    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:23.221168    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:23.221168    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:23 GMT
	I0923 13:13:23.221297    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:23.711761    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:13:23.711842    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:23.711842    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:23.711842    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:23.716263    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:23.716303    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:23.716303    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:23.716303    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:23 GMT
	I0923 13:13:23.716303    1580 round_trippers.go:580]     Audit-Id: ccfbea1c-4e2a-48e0-8e31-88b420c0319a
	I0923 13:13:23.716303    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:23.716303    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:23.716303    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:23.717080    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"432","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I0923 13:13:23.717509    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:23.718034    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:23.718034    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:23.718034    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:23.721357    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:23.721357    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:23.721357    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:23 GMT
	I0923 13:13:23.721438    1580 round_trippers.go:580]     Audit-Id: e9e540ce-cd35-4f72-8e2c-9bd5304f897b
	I0923 13:13:23.721438    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:23.721438    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:23.721438    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:23.721438    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:23.721519    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:24.210931    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:13:24.210931    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.210931    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.210931    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.215614    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:24.215730    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.215730    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.215730    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.215730    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.215730    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.215833    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.215833    1580 round_trippers.go:580]     Audit-Id: 351519d0-683c-4ddd-a483-db788f625a62
	I0923 13:13:24.216020    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"444","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0923 13:13:24.217371    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:24.217371    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.217454    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.217454    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.219796    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:24.220520    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.220520    1580 round_trippers.go:580]     Audit-Id: 7f738425-2f31-4ffb-8e25-9c73a752c597
	I0923 13:13:24.220520    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.220520    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.220520    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.220520    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.220520    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.220670    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:24.221056    1580 pod_ready.go:93] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"True"
	I0923 13:13:24.221056    1580 pod_ready.go:82] duration metric: took 2.01019s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.221056    1580 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.221159    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:13:24.221159    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.221159    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.221159    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.222830    1580 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 13:13:24.222830    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.222830    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.222830    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.222830    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.222830    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.222830    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.223603    1580 round_trippers.go:580]     Audit-Id: 11464e9e-1917-446b-bacd-bac799b1cae3
	I0923 13:13:24.223808    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"67f0bcb0-9d38-4450-9001-134a810ba113","resourceVersion":"368","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.153.215:2379","kubernetes.io/config.hash":"8712c4ce8da12187fec77f2ae7f14852","kubernetes.io/config.mirror":"8712c4ce8da12187fec77f2ae7f14852","kubernetes.io/config.seen":"2024-09-23T13:12:54.655467491Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6476 chars]
	I0923 13:13:24.224320    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:24.224363    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.224363    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.224363    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.226495    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:24.226495    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.226495    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.226495    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.226495    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.227166    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.227166    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.227222    1580 round_trippers.go:580]     Audit-Id: 51a4dea2-ec0a-4fd5-b630-90fa51b2201d
	I0923 13:13:24.227414    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:24.228054    1580 pod_ready.go:93] pod "etcd-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:13:24.228054    1580 pod_ready.go:82] duration metric: took 6.9302ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.228117    1580 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.228267    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:13:24.228267    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.228324    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.228324    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.230573    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:24.230573    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.230573    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.230573    1580 round_trippers.go:580]     Audit-Id: 3fa0d831-9cb8-460c-86e8-9e7e11a9d09b
	I0923 13:13:24.230862    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.230862    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.230862    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.230908    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.231183    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"5a468385-fdb9-4c85-b241-6cee87e52d9c","resourceVersion":"406","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.153.215:8443","kubernetes.io/config.hash":"013b4f74438b81d3e778f9e09be4f2f0","kubernetes.io/config.mirror":"013b4f74438b81d3e778f9e09be4f2f0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655472192Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0923 13:13:24.231983    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:24.232026    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.232076    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.232125    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.234358    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:24.234358    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.234358    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.234358    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.234358    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.234358    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.234358    1580 round_trippers.go:580]     Audit-Id: e7888b31-2572-4a91-b849-a9d3b2553027
	I0923 13:13:24.234358    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.234358    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:24.234358    1580 pod_ready.go:93] pod "kube-apiserver-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:13:24.234358    1580 pod_ready.go:82] duration metric: took 6.2412ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.234358    1580 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.234358    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:13:24.234358    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.234358    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.234358    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.237042    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:24.237042    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.237042    1580 round_trippers.go:580]     Audit-Id: d811b9f5-75db-4b4b-87a4-3dbf5e5158ed
	I0923 13:13:24.237042    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.237042    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.237042    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.237042    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.237042    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.237042    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"365","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0923 13:13:24.238064    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:24.238064    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.238064    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.238064    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.241250    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:24.241250    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.241250    1580 round_trippers.go:580]     Audit-Id: 64fe95fa-6d0a-46a8-9d96-629d82791867
	I0923 13:13:24.241250    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.241486    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.241486    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.241486    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.241486    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.241661    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:24.241661    1580 pod_ready.go:93] pod "kube-controller-manager-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:13:24.241661    1580 pod_ready.go:82] duration metric: took 7.3021ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.241661    1580 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.241661    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:13:24.241661    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.242185    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.242290    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.244965    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:24.244965    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.244965    1580 round_trippers.go:580]     Audit-Id: 7e8975a8-5914-4670-909d-ef1a08c5a9db
	I0923 13:13:24.244965    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.244965    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.244965    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.244965    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.244965    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.244965    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"401","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6199 chars]
	I0923 13:13:24.245831    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:24.245892    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.245892    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.245892    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.248568    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:24.248568    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.248568    1580 round_trippers.go:580]     Audit-Id: 1a634ee7-4f3e-41db-be6f-0637a95f931b
	I0923 13:13:24.248568    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.248568    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.248568    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.248568    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.248568    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.248568    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:24.248568    1580 pod_ready.go:93] pod "kube-proxy-rgmcw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:13:24.248568    1580 pod_ready.go:82] duration metric: took 6.9066ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.248568    1580 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.411114    1580 request.go:632] Waited for 162.5353ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:13:24.411114    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:13:24.411114    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.411114    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.411114    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.414834    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:24.414834    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.414834    1580 round_trippers.go:580]     Audit-Id: 3237aa07-8c17-403b-af2a-157a8e8a85c5
	I0923 13:13:24.414834    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.414834    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.414834    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.414834    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.414834    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.415206    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"405","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0923 13:13:24.611097    1580 request.go:632] Waited for 194.8471ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:24.611403    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:13:24.611403    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.611403    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.611403    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.613961    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:13:24.613961    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.613961    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.613961    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.613961    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.613961    1580 round_trippers.go:580]     Audit-Id: d1191ca1-7a63-4304-a7a2-e31852d7c84b
	I0923 13:13:24.613961    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.613961    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.615096    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0923 13:13:24.615196    1580 pod_ready.go:93] pod "kube-scheduler-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:13:24.615196    1580 pod_ready.go:82] duration metric: took 366.6036ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:13:24.615196    1580 pod_ready.go:39] duration metric: took 2.4135223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:13:24.615196    1580 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:13:24.624245    1580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:13:24.649041    1580 command_runner.go:130] > 2046
	I0923 13:13:24.649041    1580 api_server.go:72] duration metric: took 24.6161157s to wait for apiserver process to appear ...
	I0923 13:13:24.649041    1580 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:13:24.649041    1580 api_server.go:253] Checking apiserver healthz at https://172.19.153.215:8443/healthz ...
	I0923 13:13:24.657243    1580 api_server.go:279] https://172.19.153.215:8443/healthz returned 200:
	ok
	I0923 13:13:24.657780    1580 round_trippers.go:463] GET https://172.19.153.215:8443/version
	I0923 13:13:24.657808    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.657808    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.657808    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.658251    1580 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 13:13:24.659260    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.659260    1580 round_trippers.go:580]     Audit-Id: 739b0f51-cc6d-44f2-97f5-8caf177a6216
	I0923 13:13:24.659260    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.659260    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.659294    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.659294    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.659294    1580 round_trippers.go:580]     Content-Length: 263
	I0923 13:13:24.659294    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:24 GMT
	I0923 13:13:24.659294    1580 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 13:13:24.659294    1580 api_server.go:141] control plane version: v1.31.1
	I0923 13:13:24.659294    1580 api_server.go:131] duration metric: took 10.2521ms to wait for apiserver health ...
	I0923 13:13:24.659294    1580 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:13:24.811726    1580 request.go:632] Waited for 152.4225ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods
	I0923 13:13:24.811726    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods
	I0923 13:13:24.811726    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:24.811726    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:24.811726    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:24.816951    1580 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:13:24.816951    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:24.817051    1580 round_trippers.go:580]     Audit-Id: 284dc2a9-3ac8-4395-b9d9-407eccc1189a
	I0923 13:13:24.817051    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:24.817051    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:24.817051    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:24.817051    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:24.817051    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:25 GMT
	I0923 13:13:24.817379    1580 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"444","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57976 chars]
	I0923 13:13:24.819722    1580 system_pods.go:59] 8 kube-system pods found
	I0923 13:13:24.819722    1580 system_pods.go:61] "coredns-7c65d6cfc9-glx94" [f476c8f8-667a-48d4-84f8-4aa15336cea9] Running
	I0923 13:13:24.819722    1580 system_pods.go:61] "etcd-multinode-560300" [67f0bcb0-9d38-4450-9001-134a810ba113] Running
	I0923 13:13:24.819722    1580 system_pods.go:61] "kindnet-mdnmc" [ffaf3266-f3b8-424f-888b-15aff927de53] Running
	I0923 13:13:24.819722    1580 system_pods.go:61] "kube-apiserver-multinode-560300" [5a468385-fdb9-4c85-b241-6cee87e52d9c] Running
	I0923 13:13:24.819722    1580 system_pods.go:61] "kube-controller-manager-multinode-560300" [aa0d358b-19fd-4553-8a34-f772ba945019] Running
	I0923 13:13:24.819722    1580 system_pods.go:61] "kube-proxy-rgmcw" [97050e09-6fc3-4e7b-b00e-07eb9332bf15] Running
	I0923 13:13:24.819722    1580 system_pods.go:61] "kube-scheduler-multinode-560300" [01e5d6a3-2eb6-4fa4-8607-072724fb2880] Running
	I0923 13:13:24.819722    1580 system_pods.go:61] "storage-provisioner" [444d1029-f19d-4fa6-b454-c9c710e6d9b2] Running
	I0923 13:13:24.819722    1580 system_pods.go:74] duration metric: took 160.4179ms to wait for pod list to return data ...
	I0923 13:13:24.819722    1580 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:13:25.011134    1580 request.go:632] Waited for 191.399ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/namespaces/default/serviceaccounts
	I0923 13:13:25.011134    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/default/serviceaccounts
	I0923 13:13:25.011134    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:25.011134    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:25.011134    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:25.014649    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:25.015528    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:25.015528    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:25.015528    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:25.015528    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:25.015528    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:25.015528    1580 round_trippers.go:580]     Content-Length: 261
	I0923 13:13:25.015528    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:25 GMT
	I0923 13:13:25.015528    1580 round_trippers.go:580]     Audit-Id: 02f52773-e965-41ec-8698-9a1e9841628b
	I0923 13:13:25.015528    1580 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6aaed0f9-99f6-4dde-94ff-d8ba898738d6","resourceVersion":"351","creationTimestamp":"2024-09-23T13:12:59Z"}}]}
	I0923 13:13:25.015911    1580 default_sa.go:45] found service account: "default"
	I0923 13:13:25.015911    1580 default_sa.go:55] duration metric: took 196.1758ms for default service account to be created ...
	I0923 13:13:25.015911    1580 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:13:25.211894    1580 request.go:632] Waited for 195.8507ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods
	I0923 13:13:25.211894    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods
	I0923 13:13:25.211894    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:25.211894    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:25.211894    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:25.216618    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:13:25.216618    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:25.217120    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:25.217120    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:25.217120    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:25 GMT
	I0923 13:13:25.217120    1580 round_trippers.go:580]     Audit-Id: ede56c5f-1403-4b20-83be-8f249d03df10
	I0923 13:13:25.217120    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:25.217120    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:25.218209    1580 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"444","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57976 chars]
	I0923 13:13:25.220864    1580 system_pods.go:86] 8 kube-system pods found
	I0923 13:13:25.220864    1580 system_pods.go:89] "coredns-7c65d6cfc9-glx94" [f476c8f8-667a-48d4-84f8-4aa15336cea9] Running
	I0923 13:13:25.220864    1580 system_pods.go:89] "etcd-multinode-560300" [67f0bcb0-9d38-4450-9001-134a810ba113] Running
	I0923 13:13:25.220864    1580 system_pods.go:89] "kindnet-mdnmc" [ffaf3266-f3b8-424f-888b-15aff927de53] Running
	I0923 13:13:25.220864    1580 system_pods.go:89] "kube-apiserver-multinode-560300" [5a468385-fdb9-4c85-b241-6cee87e52d9c] Running
	I0923 13:13:25.220931    1580 system_pods.go:89] "kube-controller-manager-multinode-560300" [aa0d358b-19fd-4553-8a34-f772ba945019] Running
	I0923 13:13:25.220931    1580 system_pods.go:89] "kube-proxy-rgmcw" [97050e09-6fc3-4e7b-b00e-07eb9332bf15] Running
	I0923 13:13:25.220931    1580 system_pods.go:89] "kube-scheduler-multinode-560300" [01e5d6a3-2eb6-4fa4-8607-072724fb2880] Running
	I0923 13:13:25.220931    1580 system_pods.go:89] "storage-provisioner" [444d1029-f19d-4fa6-b454-c9c710e6d9b2] Running
	I0923 13:13:25.220931    1580 system_pods.go:126] duration metric: took 205.006ms to wait for k8s-apps to be running ...
	I0923 13:13:25.220931    1580 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:13:25.228988    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:13:25.252917    1580 system_svc.go:56] duration metric: took 31.9838ms WaitForService to wait for kubelet
	I0923 13:13:25.252917    1580 kubeadm.go:582] duration metric: took 25.2199513s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:13:25.252917    1580 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:13:25.411223    1580 request.go:632] Waited for 158.2956ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/nodes
	I0923 13:13:25.411536    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes
	I0923 13:13:25.411536    1580 round_trippers.go:469] Request Headers:
	I0923 13:13:25.411536    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:13:25.411536    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:13:25.415229    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:13:25.415316    1580 round_trippers.go:577] Response Headers:
	I0923 13:13:25.415316    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:13:25 GMT
	I0923 13:13:25.415316    1580 round_trippers.go:580]     Audit-Id: c171dc11-89ef-4084-9627-4a61eb30b109
	I0923 13:13:25.415387    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:13:25.415387    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:13:25.415387    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:13:25.415387    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:13:25.415695    1580 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"426","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0923 13:13:25.416490    1580 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:13:25.416570    1580 node_conditions.go:123] node cpu capacity is 2
	I0923 13:13:25.416647    1580 node_conditions.go:105] duration metric: took 163.7189ms to run NodePressure ...
	I0923 13:13:25.416647    1580 start.go:241] waiting for startup goroutines ...
	I0923 13:13:25.416647    1580 start.go:246] waiting for cluster config update ...
	I0923 13:13:25.416647    1580 start.go:255] writing updated cluster config ...
	I0923 13:13:25.420941    1580 out.go:201] 
	I0923 13:13:25.424372    1580 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:13:25.433637    1580 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:13:25.433637    1580 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:13:25.438761    1580 out.go:177] * Starting "multinode-560300-m02" worker node in "multinode-560300" cluster
	I0923 13:13:25.440885    1580 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:13:25.440885    1580 cache.go:56] Caching tarball of preloaded images
	I0923 13:13:25.441843    1580 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 13:13:25.441972    1580 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 13:13:25.442009    1580 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:13:25.447178    1580 start.go:360] acquireMachinesLock for multinode-560300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:13:25.447971    1580 start.go:364] duration metric: took 793.8µs to acquireMachinesLock for "multinode-560300-m02"
	I0923 13:13:25.448141    1580 start.go:93] Provisioning new machine with config: &{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:13:25.448141    1580 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0923 13:13:25.451377    1580 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0923 13:13:25.451652    1580 start.go:159] libmachine.API.Create for "multinode-560300" (driver="hyperv")
	I0923 13:13:25.451707    1580 client.go:168] LocalClient.Create starting
	I0923 13:13:25.451898    1580 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0923 13:13:25.452217    1580 main.go:141] libmachine: Decoding PEM data...
	I0923 13:13:25.452217    1580 main.go:141] libmachine: Parsing certificate...
	I0923 13:13:25.452396    1580 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0923 13:13:25.452563    1580 main.go:141] libmachine: Decoding PEM data...
	I0923 13:13:25.452563    1580 main.go:141] libmachine: Parsing certificate...
	I0923 13:13:25.452682    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0923 13:13:27.138026    1580 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0923 13:13:27.138026    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:27.138127    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0923 13:13:28.678815    1580 main.go:141] libmachine: [stdout =====>] : False
	
	I0923 13:13:28.678815    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:28.678904    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 13:13:30.038290    1580 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 13:13:30.039140    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:30.039196    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 13:13:33.133242    1580 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 13:13:33.133847    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:33.135763    1580 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 13:13:33.519469    1580 main.go:141] libmachine: Creating SSH key...
	I0923 13:13:33.688857    1580 main.go:141] libmachine: Creating VM...
	I0923 13:13:33.688857    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0923 13:13:36.181374    1580 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0923 13:13:36.181374    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:36.181491    1580 main.go:141] libmachine: Using switch "Default Switch"
	I0923 13:13:36.181491    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0923 13:13:37.713678    1580 main.go:141] libmachine: [stdout =====>] : True
	
	I0923 13:13:37.713678    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:37.713749    1580 main.go:141] libmachine: Creating VHD
	I0923 13:13:37.713749    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0923 13:13:41.026595    1580 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 342FF987-A7BE-4A41-8E96-33E55B59A22F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0923 13:13:41.026692    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:41.026692    1580 main.go:141] libmachine: Writing magic tar header
	I0923 13:13:41.026769    1580 main.go:141] libmachine: Writing SSH key tar header
	I0923 13:13:41.034900    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0923 13:13:43.897764    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:13:43.897764    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:43.898418    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\disk.vhd' -SizeBytes 20000MB
	I0923 13:13:46.144644    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:13:46.144644    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:46.144921    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-560300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0923 13:13:49.290043    1580 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-560300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0923 13:13:49.290232    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:49.290232    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-560300-m02 -DynamicMemoryEnabled $false
	I0923 13:13:51.237556    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:13:51.237752    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:51.237828    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-560300-m02 -Count 2
	I0923 13:13:53.095833    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:13:53.095833    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:53.095926    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-560300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\boot2docker.iso'
	I0923 13:13:55.341045    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:13:55.341045    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:55.341305    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-560300-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\disk.vhd'
	I0923 13:13:57.633944    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:13:57.633944    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:13:57.633944    1580 main.go:141] libmachine: Starting VM...
	I0923 13:13:57.633944    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-560300-m02
	I0923 13:14:00.328222    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:14:00.329196    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:00.329196    1580 main.go:141] libmachine: Waiting for host to start...
	I0923 13:14:00.329196    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:02.360945    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:02.361144    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:02.361144    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:04.553121    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:14:04.553121    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:05.553790    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:07.467465    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:07.467465    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:07.467465    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:09.672767    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:14:09.672767    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:10.673336    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:12.546349    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:12.546349    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:12.546349    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:14.703246    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:14:14.704046    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:15.704474    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:17.652449    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:17.652449    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:17.652449    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:19.871510    1580 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:14:19.871510    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:20.872293    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:22.776680    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:22.776680    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:22.776955    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:25.122912    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:14:25.122912    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:25.122912    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:27.014890    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:27.014890    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:27.014890    1580 machine.go:93] provisionDockerMachine start ...
	I0923 13:14:27.014890    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:28.861569    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:28.861569    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:28.862531    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:31.110937    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:14:31.111240    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:31.116981    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:14:31.128512    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.68 22 <nil> <nil>}
	I0923 13:14:31.128512    1580 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:14:31.263803    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 13:14:31.263873    1580 buildroot.go:166] provisioning hostname "multinode-560300-m02"
	I0923 13:14:31.263873    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:33.094764    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:33.094990    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:33.095088    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:35.304518    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:14:35.304518    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:35.308177    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:14:35.308226    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.68 22 <nil> <nil>}
	I0923 13:14:35.308226    1580 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-560300-m02 && echo "multinode-560300-m02" | sudo tee /etc/hostname
	I0923 13:14:35.455957    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-560300-m02
	
	I0923 13:14:35.455957    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:37.248361    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:37.248361    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:37.249088    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:39.436172    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:14:39.437033    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:39.441192    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:14:39.441774    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.68 22 <nil> <nil>}
	I0923 13:14:39.441774    1580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-560300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-560300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-560300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:14:39.576718    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:14:39.576853    1580 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 13:14:39.576874    1580 buildroot.go:174] setting up certificates
	I0923 13:14:39.576874    1580 provision.go:84] configureAuth start
	I0923 13:14:39.576967    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:41.421853    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:41.421853    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:41.422579    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:43.604503    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:14:43.605221    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:43.605221    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:45.437810    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:45.437810    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:45.437810    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:47.654290    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:14:47.654290    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:47.654290    1580 provision.go:143] copyHostCerts
	I0923 13:14:47.655217    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 13:14:47.655436    1580 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 13:14:47.655436    1580 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 13:14:47.655714    1580 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 13:14:47.656545    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 13:14:47.656685    1580 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 13:14:47.656756    1580 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 13:14:47.656984    1580 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 13:14:47.657597    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 13:14:47.657760    1580 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 13:14:47.657832    1580 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 13:14:47.657909    1580 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 13:14:47.658802    1580 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-560300-m02 san=[127.0.0.1 172.19.147.68 localhost minikube multinode-560300-m02]
	I0923 13:14:47.818045    1580 provision.go:177] copyRemoteCerts
	I0923 13:14:47.825669    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:14:47.825669    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:49.643840    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:49.643840    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:49.643840    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:51.884049    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:14:51.884049    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:51.884480    1580 sshutil.go:53] new ssh client: &{IP:172.19.147.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:14:51.994733    1580 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1687826s)
	I0923 13:14:51.994733    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 13:14:51.995825    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:14:52.039894    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 13:14:52.040134    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0923 13:14:52.081108    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 13:14:52.081108    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:14:52.124152    1580 provision.go:87] duration metric: took 12.5464311s to configureAuth
	I0923 13:14:52.124152    1580 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:14:52.124801    1580 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:14:52.124868    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:53.980105    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:53.980105    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:53.980204    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:14:56.240621    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:14:56.240708    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:56.244629    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:14:56.245159    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.68 22 <nil> <nil>}
	I0923 13:14:56.245159    1580 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 13:14:56.382215    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 13:14:56.382215    1580 buildroot.go:70] root file system type: tmpfs
	I0923 13:14:56.382215    1580 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 13:14:56.382215    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:14:58.187707    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:14:58.187707    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:14:58.188224    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:00.376656    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:15:00.377096    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:00.381167    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:15:00.381330    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.68 22 <nil> <nil>}
	I0923 13:15:00.381330    1580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.153.215"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 13:15:00.542519    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.153.215
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 13:15:00.542519    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:15:02.459311    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:02.459311    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:02.459311    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:04.710709    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:15:04.710779    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:04.714423    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:15:04.715038    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.68 22 <nil> <nil>}
	I0923 13:15:04.715038    1580 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 13:15:06.889118    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
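The SSH command above uses an "install only if changed" idiom: `diff -u` exits non-zero when the new unit differs (or the old one is missing, as here), which triggers the move-and-restart branch. A sketch of the idiom on scratch files, with illustrative contents:

```shell
# Sketch of the install-if-changed idiom (diff ... || { mv ...; restart; })
# used above, run against temp files so nothing system-wide is touched.
old=$(mktemp); new=$(mktemp)
printf 'ExecStart=old\n' > "$old"
printf 'ExecStart=new\n' > "$new"
# diff exits non-zero when files differ, so the install branch runs
diff -u "$old" "$new" >/dev/null || mv "$new" "$old"
cat "$old"
```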
	
	I0923 13:15:06.889198    1580 machine.go:96] duration metric: took 39.871617s to provisionDockerMachine
	I0923 13:15:06.889256    1580 client.go:171] duration metric: took 1m41.4307011s to LocalClient.Create
	I0923 13:15:06.889332    1580 start.go:167] duration metric: took 1m41.4307567s to libmachine.API.Create "multinode-560300"
	I0923 13:15:06.889332    1580 start.go:293] postStartSetup for "multinode-560300-m02" (driver="hyperv")
	I0923 13:15:06.889399    1580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:15:06.901763    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:15:06.901763    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:15:08.791513    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:08.792526    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:08.792719    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:11.103690    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:15:11.104274    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:11.104401    1580 sshutil.go:53] new ssh client: &{IP:172.19.147.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:15:11.211727    1580 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3096734s)
	I0923 13:15:11.220402    1580 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:15:11.226767    1580 command_runner.go:130] > NAME=Buildroot
	I0923 13:15:11.226870    1580 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:15:11.226870    1580 command_runner.go:130] > ID=buildroot
	I0923 13:15:11.226870    1580 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:15:11.226907    1580 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:15:11.226907    1580 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:15:11.226907    1580 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 13:15:11.227436    1580 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 13:15:11.228086    1580 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 13:15:11.228086    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 13:15:11.236281    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:15:11.252589    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 13:15:11.296823    1580 start.go:296] duration metric: took 4.4071941s for postStartSetup
	I0923 13:15:11.298994    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:15:13.163704    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:13.163704    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:13.163704    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:15.493499    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:15:15.494188    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:15.494188    1580 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:15:15.496190    1580 start.go:128] duration metric: took 1m50.0406208s to createHost
	I0923 13:15:15.496726    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:15:17.351915    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:17.351915    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:17.352527    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:19.560017    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:15:19.560068    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:19.563712    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:15:19.563712    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.68 22 <nil> <nil>}
	I0923 13:15:19.563712    1580 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:15:19.692252    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727097319.899554281
	
	I0923 13:15:19.692252    1580 fix.go:216] guest clock: 1727097319.899554281
	I0923 13:15:19.692252    1580 fix.go:229] Guest: 2024-09-23 13:15:19.899554281 +0000 UTC Remote: 2024-09-23 13:15:15.49619 +0000 UTC m=+306.274816401 (delta=4.403364281s)
	I0923 13:15:19.692252    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:15:21.598393    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:21.599574    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:21.599574    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:23.916255    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:15:23.916255    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:23.920363    1580 main.go:141] libmachine: Using SSH client type: native
	I0923 13:15:23.920520    1580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.68 22 <nil> <nil>}
	I0923 13:15:23.920520    1580 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727097319
	I0923 13:15:24.067408    1580 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 13:15:19 UTC 2024
	
	I0923 13:15:24.067408    1580 fix.go:236] clock set: Mon Sep 23 13:15:19 UTC 2024
	 (err=<nil>)
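The clock-fix step above runs `date +%s.%N` on the guest, compares it with the local timestamp to get the delta, then forces agreement with `sudo date -s @<epoch>`. A sketch of the skew computation with one epoch taken from the log and a hypothetical second value:

```shell
# Sketch of the guest-clock skew check above.
guest_epoch=1727097319   # from `date +%s` on the guest (value from the log)
host_epoch=1727097315    # hypothetical host epoch at the same instant
delta=$((guest_epoch - host_epoch))
echo "clock delta: ${delta}s"
# minikube then resets the guest clock with: sudo date -s @<epoch>
```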
	I0923 13:15:24.067408    1580 start.go:83] releasing machines lock for "multinode-560300-m02", held for 1m58.6113689s
	I0923 13:15:24.068050    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:15:25.979393    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:25.980364    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:25.980577    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:28.279448    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:15:28.279448    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:28.283744    1580 out.go:177] * Found network options:
	I0923 13:15:28.286805    1580 out.go:177]   - NO_PROXY=172.19.153.215
	W0923 13:15:28.289667    1580 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:15:28.291821    1580 out.go:177]   - NO_PROXY=172.19.153.215
	W0923 13:15:28.294436    1580 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:15:28.294436    1580 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:15:28.296923    1580 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 13:15:28.297335    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:15:28.304589    1580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:15:28.305661    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:15:30.288903    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:30.289076    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:30.289076    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:30.289838    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:30.289920    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:30.290056    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:32.688988    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:15:32.688988    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:32.689316    1580 sshutil.go:53] new ssh client: &{IP:172.19.147.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:15:32.712005    1580 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:15:32.712082    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:32.712420    1580 sshutil.go:53] new ssh client: &{IP:172.19.147.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:15:32.787479    1580 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0923 13:15:32.787601    1580 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4815708s)
	W0923 13:15:32.787626    1580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:15:32.796727    1580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:15:32.801202    1580 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0923 13:15:32.801202    1580 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5039741s)
	W0923 13:15:32.801202    1580 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 13:15:32.828937    1580 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 13:15:32.828937    1580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 13:15:32.828937    1580 start.go:495] detecting cgroup driver to use...
	I0923 13:15:32.829202    1580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:15:32.861979    1580 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 13:15:32.871695    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 13:15:32.897286    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 13:15:32.916017    1580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:15:32.923696    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W0923 13:15:32.935861    1580 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 13:15:32.935861    1580 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 13:15:32.955152    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:15:32.980714    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:15:33.013843    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:15:33.042077    1580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:15:33.068248    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:15:33.097235    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:15:33.123422    1580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
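The `sed -i -r` invocations above rewrite settings in `/etc/containerd/config.toml` in place while preserving indentation via the captured leading-whitespace group. The same substitutions applied to a scratch copy, with two sample lines mimicking a containerd config:

```shell
# Sketch of the config.toml rewrites above, on a scratch file so it is
# safe to run anywhere with GNU sed.
cfg=$(mktemp)
printf '%s\n' '    SystemdCgroup = true' \
              '    sandbox_image = "registry.k8s.io/pause:3.9"' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
cat "$cfg"
```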
	I0923 13:15:33.153904    1580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:15:33.171233    1580 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:15:33.171643    1580 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:15:33.182598    1580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:15:33.215204    1580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:15:33.244876    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:15:33.430631    1580 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:15:33.460372    1580 start.go:495] detecting cgroup driver to use...
	I0923 13:15:33.469403    1580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 13:15:33.491355    1580 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 13:15:33.491355    1580 command_runner.go:130] > [Unit]
	I0923 13:15:33.491355    1580 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 13:15:33.491355    1580 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 13:15:33.491355    1580 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 13:15:33.491355    1580 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 13:15:33.491355    1580 command_runner.go:130] > StartLimitBurst=3
	I0923 13:15:33.491355    1580 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 13:15:33.491355    1580 command_runner.go:130] > [Service]
	I0923 13:15:33.491355    1580 command_runner.go:130] > Type=notify
	I0923 13:15:33.491355    1580 command_runner.go:130] > Restart=on-failure
	I0923 13:15:33.491355    1580 command_runner.go:130] > Environment=NO_PROXY=172.19.153.215
	I0923 13:15:33.491355    1580 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 13:15:33.491355    1580 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 13:15:33.491355    1580 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 13:15:33.491355    1580 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 13:15:33.491355    1580 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 13:15:33.491355    1580 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 13:15:33.491355    1580 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 13:15:33.491355    1580 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 13:15:33.491355    1580 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 13:15:33.491355    1580 command_runner.go:130] > ExecStart=
	I0923 13:15:33.491355    1580 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0923 13:15:33.491355    1580 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 13:15:33.491355    1580 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 13:15:33.491355    1580 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 13:15:33.491355    1580 command_runner.go:130] > LimitNOFILE=infinity
	I0923 13:15:33.491355    1580 command_runner.go:130] > LimitNPROC=infinity
	I0923 13:15:33.491355    1580 command_runner.go:130] > LimitCORE=infinity
	I0923 13:15:33.491355    1580 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 13:15:33.491355    1580 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0923 13:15:33.491355    1580 command_runner.go:130] > TasksMax=infinity
	I0923 13:15:33.491355    1580 command_runner.go:130] > TimeoutStartSec=0
	I0923 13:15:33.491355    1580 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 13:15:33.491355    1580 command_runner.go:130] > Delegate=yes
	I0923 13:15:33.491355    1580 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 13:15:33.491355    1580 command_runner.go:130] > KillMode=process
	I0923 13:15:33.491355    1580 command_runner.go:130] > [Install]
	I0923 13:15:33.491355    1580 command_runner.go:130] > WantedBy=multi-user.target
	I0923 13:15:33.500229    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:15:33.525425    1580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:15:33.565507    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:15:33.598997    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:15:33.632776    1580 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 13:15:33.688238    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:15:33.710324    1580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:15:33.743256    1580 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 13:15:33.751418    1580 ssh_runner.go:195] Run: which cri-dockerd
	I0923 13:15:33.756949    1580 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 13:15:33.764258    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 13:15:33.780688    1580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 13:15:33.816320    1580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 13:15:33.997271    1580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 13:15:34.170272    1580 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 13:15:34.170272    1580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 13:15:34.210389    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:15:34.371211    1580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:15:36.902263    1580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5308812s)
	I0923 13:15:36.913529    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 13:15:36.945999    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:15:36.982567    1580 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 13:15:37.167704    1580 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 13:15:37.349631    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:15:37.543262    1580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 13:15:37.584506    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:15:37.615602    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:15:37.790703    1580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 13:15:37.890128    1580 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 13:15:37.900929    1580 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 13:15:37.910856    1580 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 13:15:37.910856    1580 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:15:37.910856    1580 command_runner.go:130] > Device: 0,22	Inode: 889         Links: 1
	I0923 13:15:37.910856    1580 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 13:15:37.910856    1580 command_runner.go:130] > Access: 2024-09-23 13:15:38.026356460 +0000
	I0923 13:15:37.910856    1580 command_runner.go:130] > Modify: 2024-09-23 13:15:38.026356460 +0000
	I0923 13:15:37.910856    1580 command_runner.go:130] > Change: 2024-09-23 13:15:38.030356748 +0000
	I0923 13:15:37.910856    1580 command_runner.go:130] >  Birth: -
	I0923 13:15:37.911847    1580 start.go:563] Will wait 60s for crictl version
	I0923 13:15:37.919848    1580 ssh_runner.go:195] Run: which crictl
	I0923 13:15:37.925830    1580 command_runner.go:130] > /usr/bin/crictl
	I0923 13:15:37.938132    1580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:15:37.991999    1580 command_runner.go:130] > Version:  0.1.0
	I0923 13:15:37.991999    1580 command_runner.go:130] > RuntimeName:  docker
	I0923 13:15:37.991999    1580 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 13:15:37.991999    1580 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:15:37.994315    1580 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 13:15:38.000737    1580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:15:38.031390    1580 command_runner.go:130] > 27.3.0
	I0923 13:15:38.039336    1580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:15:38.066268    1580 command_runner.go:130] > 27.3.0
	I0923 13:15:38.070436    1580 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 13:15:38.073833    1580 out.go:177]   - env NO_PROXY=172.19.153.215
	I0923 13:15:38.076368    1580 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 13:15:38.079994    1580 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 13:15:38.079994    1580 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 13:15:38.079994    1580 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 13:15:38.079994    1580 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 13:15:38.082937    1580 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 13:15:38.082937    1580 ip.go:214] interface addr: 172.19.144.1/20
	I0923 13:15:38.090559    1580 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 13:15:38.096454    1580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
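The one-liner above updates `/etc/hosts` atomically: it filters out any stale `host.minikube.internal` line, appends the current host IP, writes to a temp file, and copies it back. The same logic against a scratch file (IPs from the log; the stale entry is illustrative):

```shell
# Sketch of the /etc/hosts rewrite above, run on a scratch file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.144.9\thost.minikube.internal\n' > "$hosts"
# drop any stale entry, then append the current host IP
{ grep -v 'host\.minikube\.internal$' "$hosts"; \
  printf '172.19.144.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```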
	I0923 13:15:38.117293    1580 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:15:38.117886    1580 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:15:38.118413    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:15:40.023747    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:40.023747    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:40.024155    1580 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:15:40.024792    1580 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300 for IP: 172.19.147.68
	I0923 13:15:40.024792    1580 certs.go:194] generating shared ca certs ...
	I0923 13:15:40.024792    1580 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:15:40.025323    1580 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 13:15:40.025589    1580 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 13:15:40.025853    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:15:40.025853    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:15:40.025853    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:15:40.026458    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:15:40.027032    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 13:15:40.027250    1580 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 13:15:40.027250    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 13:15:40.027775    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 13:15:40.028145    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 13:15:40.028474    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 13:15:40.028758    1580 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 13:15:40.028969    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 13:15:40.029116    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 13:15:40.029226    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:15:40.029313    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:15:40.071864    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 13:15:40.115141    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:15:40.157128    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:15:40.199241    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 13:15:40.243795    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 13:15:40.288836    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:15:40.344369    1580 ssh_runner.go:195] Run: openssl version
	I0923 13:15:40.351946    1580 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:15:40.360294    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:15:40.386598    1580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:15:40.393543    1580 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:15:40.393722    1580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:15:40.405242    1580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:15:40.413375    1580 command_runner.go:130] > b5213941
	I0923 13:15:40.421995    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:15:40.450642    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 13:15:40.480222    1580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 13:15:40.486724    1580 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:15:40.486724    1580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:15:40.498742    1580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 13:15:40.506466    1580 command_runner.go:130] > 51391683
	I0923 13:15:40.514333    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 13:15:40.540320    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 13:15:40.567398    1580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 13:15:40.574290    1580 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:15:40.574357    1580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:15:40.582567    1580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 13:15:40.591435    1580 command_runner.go:130] > 3ec20f2e
	I0923 13:15:40.599270    1580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
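The sequence above installs each CA certificate into the guest's OpenSSL trust store: copy the PEM into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash`, then symlink `/etc/ssl/certs/<hash>.0` at it (`test -L || ln -fs`, so reruns are idempotent). A minimal sketch of the same technique against a scratch directory; the CA here is a throwaway self-signed cert, not minikube's:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Scratch directory standing in for /usr/share/ca-certificates + /etc/ssl/certs.
dir=$(mktemp -d)

# Throwaway self-signed CA (minikube ships its own ca.crt instead).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" \
  -keyout "$dir/ca.key" -out "$dir/minikubeCA.pem" 2>/dev/null

# The subject hash names the trust-store symlink, e.g. b5213941 in the log above.
hash=$(openssl x509 -hash -noout -in "$dir/minikubeCA.pem")

# test -L || ln -fs, mirroring the ssh_runner command in the log.
test -L "$dir/$hash.0" || ln -fs "$dir/minikubeCA.pem" "$dir/$hash.0"

# With the hash link in place, OpenSSL finds the CA via -CApath lookup.
openssl verify -CApath "$dir" "$dir/minikubeCA.pem"
```

OpenSSL locates a trust anchor by hashing the issuer subject and probing `<CApath>/<hash>.N`, which is why the link must be named after the hash rather than the certificate's filename.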
	I0923 13:15:40.628770    1580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:15:40.635916    1580 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:15:40.635916    1580 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:15:40.635916    1580 kubeadm.go:934] updating node {m02 172.19.147.68 8443 v1.31.1 docker false true} ...
	I0923 13:15:40.635916    1580 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-560300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.147.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:15:40.644709    1580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:15:40.665434    1580 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	I0923 13:15:40.665510    1580 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 13:15:40.673908    1580 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 13:15:40.691930    1580 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 13:15:40.691979    1580 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 13:15:40.691979    1580 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 13:15:40.692130    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 13:15:40.692130    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
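The `binary.go` lines above fetch each Kubernetes binary from dl.k8s.io with `?checksum=file:...sha256`, i.e. the download is verified against the published `.sha256` digest rather than trusted blindly. A hedged sketch of that checksum check, using a local file in place of the real download (filenames here are illustrative stand-ins):

```shell
#!/usr/bin/env bash
set -euo pipefail

work=$(mktemp -d)

# Stand-in for the downloaded kubelet binary.
printf 'fake-kubelet-bytes' > "$work/kubelet"

# Stand-in for .../bin/linux/amd64/kubelet.sha256, which holds the expected digest.
sha256sum "$work/kubelet" | awk '{print $1}' > "$work/kubelet.sha256"

# Verify: recompute the digest and compare, failing loudly on mismatch.
want=$(cat "$work/kubelet.sha256")
got=$(sha256sum "$work/kubelet" | awk '{print $1}')
[ "$got" = "$want" ] || { echo "checksum mismatch for kubelet" >&2; exit 1; }
echo "kubelet checksum OK"
```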
	I0923 13:15:40.703237    1580 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 13:15:40.708952    1580 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 13:15:40.709240    1580 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0923 13:15:40.709240    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 13:15:40.709679    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:15:40.709679    1580 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 13:15:40.769353    1580 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 13:15:40.769353    1580 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0923 13:15:40.769353    1580 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 13:15:40.769740    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 13:15:40.782564    1580 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0923 13:15:40.838596    1580 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 13:15:40.838596    1580 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0923 13:15:40.838596    1580 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
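Each transfer above is gated by a `stat -c "%s %y"` existence probe: when the probe exits with status 1 (file missing), the binary is scp'd into place. The same check-then-copy pattern, sketched locally with `cp` standing in for the scp-over-SSH that ssh_runner.go:362 performs:

```shell
#!/usr/bin/env bash
set -euo pipefail

src=$(mktemp)                          # stands in for the host-side cache entry
printf 'kubeadm' > "$src"
dst_dir=$(mktemp -d)/binaries/v1.31.1  # stands in for /var/lib/minikube/binaries/v1.31.1

# Probe size+mtime first; a missing file makes stat exit 1, as in the log.
if ! stat -c '%s %y' "$dst_dir/kubeadm" >/dev/null 2>&1; then
  mkdir -p "$dst_dir"
  cp "$src" "$dst_dir/kubeadm"         # minikube copies via scp here
fi

# The probe now succeeds, so a rerun would skip the copy.
stat -c '%s %y' "$dst_dir/kubeadm"
```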
	I0923 13:15:41.768801    1580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0923 13:15:41.785326    1580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0923 13:15:41.813484    1580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:15:41.849658    1580 ssh_runner.go:195] Run: grep 172.19.153.215	control-plane.minikube.internal$ /etc/hosts
	I0923 13:15:41.856273    1580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.153.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
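The /etc/hosts update above is a grep-out-then-append rewrite: filter out any existing `control-plane.minikube.internal` entry, append one with the current control-plane IP, and copy the temp file back, so repeated starts never accumulate stale entries. The same rewrite against a scratch hosts file:

```shell
#!/usr/bin/env bash
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"

ip=172.19.153.215   # the control-plane IP from the log

# Drop any stale entry, then append the fresh one. (No set -e here:
# grep -v exits 1 when it filters out every line, which is fine.)
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

Unlike `sed -i`, this also handles the first run cleanly, when no entry exists yet.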
	I0923 13:15:41.891286    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:15:42.073325    1580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:15:42.108492    1580 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:15:42.109090    1580 start.go:317] joinCluster: &{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:15:42.109090    1580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 13:15:42.109754    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:15:43.984104    1580 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:15:43.984104    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:43.984915    1580 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:15:46.207658    1580 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:15:46.207658    1580 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:15:46.207658    1580 sshutil.go:53] new ssh client: &{IP:172.19.153.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:15:46.381547    1580 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 9uj4hk.ohg0udn1ds8miduj --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
	I0923 13:15:46.381714    1580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.271751s)
	I0923 13:15:46.381803    1580 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:15:46.381940    1580 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9uj4hk.ohg0udn1ds8miduj --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m02"
	I0923 13:15:46.441263    1580 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:15:46.567352    1580 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0923 13:15:46.567403    1580 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0923 13:15:46.635342    1580 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:15:46.635450    1580 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:15:46.635450    1580 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 13:15:46.831291    1580 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:15:47.832252    1580 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001547782s
	I0923 13:15:47.832252    1580 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0923 13:15:47.864615    1580 command_runner.go:130] > This node has joined the cluster:
	I0923 13:15:47.864688    1580 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0923 13:15:47.864688    1580 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0923 13:15:47.864688    1580 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0923 13:15:47.867080    1580 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:15:47.867367    1580 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9uj4hk.ohg0udn1ds8miduj --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m02": (1.4852473s)
	I0923 13:15:47.867462    1580 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 13:15:48.059270    1580 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0923 13:15:48.253883    1580 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-560300-m02 minikube.k8s.io/updated_at=2024_09_23T13_15_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=multinode-560300 minikube.k8s.io/primary=false
	I0923 13:15:48.366528    1580 command_runner.go:130] > node/multinode-560300-m02 labeled
	I0923 13:15:48.366588    1580 start.go:319] duration metric: took 6.2570759s to joinCluster
	I0923 13:15:48.366830    1580 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:15:48.367342    1580 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:15:48.371434    1580 out.go:177] * Verifying Kubernetes components...
	I0923 13:15:48.383172    1580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:15:48.573643    1580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:15:48.599393    1580 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:15:48.600217    1580 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.153.215:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:15:48.601312    1580 node_ready.go:35] waiting up to 6m0s for node "multinode-560300-m02" to be "Ready" ...
	I0923 13:15:48.601623    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:48.601682    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:48.601682    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:48.601682    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:48.615162    1580 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0923 13:15:48.615162    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:48.615162    1580 round_trippers.go:580]     Audit-Id: 9ab5b2e9-63df-4c8c-b52d-79f1a5f1589f
	I0923 13:15:48.615162    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:48.615162    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:48.615162    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:48.615162    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:48.615162    1580 round_trippers.go:580]     Content-Length: 3920
	I0923 13:15:48.615162    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:48 GMT
	I0923 13:15:48.615162    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"592","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2896 chars]
	I0923 13:15:49.102460    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:49.102460    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:49.102460    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:49.102460    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:49.107157    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:15:49.107251    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:49.107251    1580 round_trippers.go:580]     Audit-Id: 013bacad-ea2e-4516-8e1c-dd421bb72c37
	I0923 13:15:49.107251    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:49.107251    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:49.107251    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:49.107251    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:49.107251    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:49.107251    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:49 GMT
	I0923 13:15:49.107529    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:49.601883    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:49.601883    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:49.601883    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:49.601883    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:49.606074    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:15:49.606170    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:49.606170    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:49.606170    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:49 GMT
	I0923 13:15:49.606170    1580 round_trippers.go:580]     Audit-Id: 5a1214d6-986a-4cbf-8929-aa55b0a5cfdf
	I0923 13:15:49.606170    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:49.606170    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:49.606170    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:49.606268    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:49.606536    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:50.102017    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:50.102017    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:50.102017    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:50.102017    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:50.109992    1580 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 13:15:50.110974    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:50.110974    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:50.110974    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:50.110974    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:50 GMT
	I0923 13:15:50.110974    1580 round_trippers.go:580]     Audit-Id: c61f8bfd-1a62-42dd-8fc9-11f038b3e73a
	I0923 13:15:50.110974    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:50.110974    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:50.110974    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:50.110974    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:50.602235    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:50.602235    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:50.602235    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:50.602235    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:50.606630    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:15:50.606630    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:50.606630    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:50 GMT
	I0923 13:15:50.606630    1580 round_trippers.go:580]     Audit-Id: a247b275-f45b-4f9a-accd-fe1780c82437
	I0923 13:15:50.606630    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:50.606630    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:50.606630    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:50.606630    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:50.606630    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:50.606630    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:50.607617    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
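node_ready.go re-GETs the Node object roughly every 500ms and inspects `status.conditions` for the `Ready` condition; at this point the kubelet has registered but not yet reported Ready, hence the `"Ready":"False"` above. A minimal sketch of that condition check over a saved Node JSON, no cluster required (the JSON literal below is a trimmed, illustrative mirror of the response bodies, and the grep/sed parse relies on its fixed field order; real code decodes the JSON properly):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Trimmed Node object shaped like the request.go:1351 response bodies above.
node_json='{"kind":"Node","status":{"conditions":[
  {"type":"MemoryPressure","status":"False"},
  {"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}'

# Extract the Ready condition's status without jq (illustrative parsing only).
ready=$(printf '%s' "$node_json" \
  | grep -o '"type":"Ready","status":"[A-Za-z]*"' \
  | sed 's/.*"status":"\([A-Za-z]*\)".*/\1/')

echo "node Ready=$ready"   # the poll loop repeats until this is True
```

Interactively, `kubectl wait --for=condition=Ready node/multinode-560300-m02 --timeout=6m0s` performs the equivalent wait.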
	I0923 13:15:51.102616    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:51.102713    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:51.102713    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:51.102713    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:51.106116    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:15:51.106192    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:51.106192    1580 round_trippers.go:580]     Audit-Id: 0772e9ec-de9c-4a10-ada5-f1fd31054487
	I0923 13:15:51.106192    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:51.106192    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:51.106192    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:51.106192    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:51.106192    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:51.106265    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:51 GMT
	I0923 13:15:51.106387    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:51.602603    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:51.602603    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:51.602603    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:51.602603    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:51.606597    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:15:51.606597    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:51.606597    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:51.606597    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:51.606597    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:51.606597    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:51.606597    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:51 GMT
	I0923 13:15:51.606597    1580 round_trippers.go:580]     Audit-Id: c739b687-bb27-4ff8-94f2-dae0075d0545
	I0923 13:15:51.606597    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:51.606597    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:52.102313    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:52.102313    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:52.102313    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:52.102313    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:52.106650    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:15:52.106650    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:52.106764    1580 round_trippers.go:580]     Audit-Id: d18603eb-0aa5-4523-b46e-f6c8f80bda63
	I0923 13:15:52.106764    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:52.106764    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:52.106764    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:52.106764    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:52.106764    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:52.106764    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:52 GMT
	I0923 13:15:52.106871    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:52.601824    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:52.601824    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:52.602116    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:52.602116    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:52.605596    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:15:52.605596    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:52.605690    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:52.605690    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:52 GMT
	I0923 13:15:52.605690    1580 round_trippers.go:580]     Audit-Id: 8aa8877f-30cc-4cbe-8d8b-4c8360104a54
	I0923 13:15:52.605690    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:52.605690    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:52.605690    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:52.605690    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:52.605874    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:53.102313    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:53.102313    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:53.102313    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:53.102420    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:53.106144    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:15:53.106144    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:53.106144    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:53.106144    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:53 GMT
	I0923 13:15:53.106244    1580 round_trippers.go:580]     Audit-Id: 754e3749-7eba-4679-959c-fa88e1516eda
	I0923 13:15:53.106244    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:53.106244    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:53.106244    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:53.106244    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:53.106244    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:53.106865    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:15:53.602283    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:53.602768    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:53.602768    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:53.602768    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:53.606010    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:15:53.606010    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:53.606010    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:53.606010    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:53.606010    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:53.606010    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:53.606010    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:53 GMT
	I0923 13:15:53.606010    1580 round_trippers.go:580]     Audit-Id: 5a20a508-b7c7-473e-9b63-051408b0318b
	I0923 13:15:53.606010    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:53.606862    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:54.102353    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:54.102353    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:54.102353    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:54.102353    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:54.105611    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:15:54.105683    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:54.105683    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:54 GMT
	I0923 13:15:54.105683    1580 round_trippers.go:580]     Audit-Id: 0ba38045-f792-4143-a1ba-54d8c0431a69
	I0923 13:15:54.105683    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:54.105683    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:54.105735    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:54.105735    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:54.105735    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:54.105879    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:54.602187    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:54.602187    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:54.602187    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:54.602187    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:54.607608    1580 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:15:54.607659    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:54.607659    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:54.607659    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:54.607704    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:54.607704    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:54 GMT
	I0923 13:15:54.607704    1580 round_trippers.go:580]     Audit-Id: 8febc033-fc99-40a6-bbb1-1bba22af463b
	I0923 13:15:54.607704    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:54.607756    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:54.607949    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:55.101981    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:55.101981    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:55.101981    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:55.101981    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:55.107917    1580 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:15:55.107996    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:55.107996    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:55.108046    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:55.108046    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:55 GMT
	I0923 13:15:55.108046    1580 round_trippers.go:580]     Audit-Id: 6997b4b9-3e73-4c9d-874b-6ae32f6756a0
	I0923 13:15:55.108046    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:55.108046    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:55.108046    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:55.108147    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:55.108527    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:15:55.602545    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:55.602545    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:55.602545    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:55.602545    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:55.608381    1580 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:15:55.608381    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:55.608381    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:55 GMT
	I0923 13:15:55.608381    1580 round_trippers.go:580]     Audit-Id: 4bc6a0ba-2b85-4e95-96ec-5e3a6321bb24
	I0923 13:15:55.608381    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:55.608381    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:55.608381    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:55.608381    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:55.608381    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:55.608381    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:56.101977    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:56.101977    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:56.101977    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:56.101977    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:56.106917    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:15:56.106917    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:56.106917    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:56.106917    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:56.106917    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:56.106917    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:56.106917    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:56 GMT
	I0923 13:15:56.106917    1580 round_trippers.go:580]     Audit-Id: eb2e9352-804e-4762-a136-9f9a205db431
	I0923 13:15:56.106917    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:56.106917    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:56.602722    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:56.602722    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:56.602722    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:56.602722    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:56.606854    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:15:56.606854    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:56.606854    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:56.606854    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:56.606854    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:56.606854    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:56.606854    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:56 GMT
	I0923 13:15:56.606854    1580 round_trippers.go:580]     Audit-Id: 82410912-6c79-4eb3-b7c0-a0904a637bfa
	I0923 13:15:56.606854    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:56.606854    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:57.103133    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:57.103133    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:57.103133    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:57.103133    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:57.107243    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:15:57.107243    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:57.107243    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:57.107243    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:57.107243    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:57.107243    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:57.107243    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:57 GMT
	I0923 13:15:57.107243    1580 round_trippers.go:580]     Audit-Id: 737b3e0a-f5e7-4bb4-abac-bfe922fe9b9c
	I0923 13:15:57.107243    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:57.107550    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:57.602565    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:57.602565    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:57.602565    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:57.602565    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:57.609429    1580 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:15:57.609429    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:57.609429    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:57.609952    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:57.609952    1580 round_trippers.go:580]     Content-Length: 4029
	I0923 13:15:57.609952    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:57 GMT
	I0923 13:15:57.609952    1580 round_trippers.go:580]     Audit-Id: 618aa1df-bfd2-45e3-9650-97e9b2fe2860
	I0923 13:15:57.609952    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:57.609952    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:57.610022    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"596","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3005 chars]
	I0923 13:15:57.610022    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:15:58.104176    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:58.104176    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:58.104176    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:58.104176    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:58.107583    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:15:58.107583    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:58.107583    1580 round_trippers.go:580]     Audit-Id: 4e1819db-2cef-48ac-8fd8-588ae374dd01
	I0923 13:15:58.107583    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:58.107583    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:58.107583    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:58.107583    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:58.107583    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:58 GMT
	I0923 13:15:58.107583    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:15:58.603071    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:58.603071    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:58.603071    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:58.603403    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:58.606144    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:15:58.606170    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:58.606170    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:58.606170    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:58.606236    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:58.606236    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:58 GMT
	I0923 13:15:58.606236    1580 round_trippers.go:580]     Audit-Id: ffbf1fe4-73c7-4ae6-af2f-23a58daa673a
	I0923 13:15:58.606236    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:58.606374    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:15:59.102719    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:59.102719    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:59.102719    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:59.102719    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:59.105766    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:15:59.106771    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:59.106771    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:59.106771    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:59.106771    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:59.106771    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:59.106771    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:59 GMT
	I0923 13:15:59.106771    1580 round_trippers.go:580]     Audit-Id: ee6f5e79-f5a9-4ec6-b979-c9e0d5de80f7
	I0923 13:15:59.106771    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:15:59.603671    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:15:59.603671    1580 round_trippers.go:469] Request Headers:
	I0923 13:15:59.603671    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:15:59.603671    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:15:59.606815    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:15:59.606815    1580 round_trippers.go:577] Response Headers:
	I0923 13:15:59.606815    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:15:59.606815    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:15:59.606815    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:15:59 GMT
	I0923 13:15:59.606815    1580 round_trippers.go:580]     Audit-Id: 32525e9d-971a-4c64-affc-2fc7912c6a38
	I0923 13:15:59.606815    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:15:59.606815    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:15:59.606815    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:00.102746    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:00.102746    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:00.102746    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:00.102746    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:00.107094    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:00.107094    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:00.107094    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:00.107165    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:00.107165    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:00.107165    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:00 GMT
	I0923 13:16:00.107165    1580 round_trippers.go:580]     Audit-Id: f12bf5e6-eb0a-41ac-a8a3-b44dc73989ba
	I0923 13:16:00.107165    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:00.107302    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:00.107302    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:16:00.602407    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:00.602407    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:00.602407    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:00.602407    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:00.606751    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:00.606819    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:00.606819    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:00.606819    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:00.606819    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:00 GMT
	I0923 13:16:00.606819    1580 round_trippers.go:580]     Audit-Id: f576bee4-3363-4cf1-be6f-d467a500642c
	I0923 13:16:00.606819    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:00.606819    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:00.606819    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:01.102632    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:01.102632    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:01.102632    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:01.102632    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:01.106683    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:01.106683    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:01.106683    1580 round_trippers.go:580]     Audit-Id: 08732c51-e690-46b7-af98-a518fdbe80fa
	I0923 13:16:01.106683    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:01.106683    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:01.106683    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:01.106683    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:01.106683    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:01 GMT
	I0923 13:16:01.106683    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:01.602726    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:01.602726    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:01.602726    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:01.602726    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:01.606833    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:01.606833    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:01.606833    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:01.606833    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:01.606833    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:01 GMT
	I0923 13:16:01.606833    1580 round_trippers.go:580]     Audit-Id: 1f69b4b6-7732-414e-a503-519d0467b96f
	I0923 13:16:01.606833    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:01.606833    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:01.607037    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:02.102726    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:02.102726    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:02.102726    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:02.102726    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:02.105458    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:02.106369    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:02.106369    1580 round_trippers.go:580]     Audit-Id: 8f637172-af4a-47ef-8675-4f68e1e361b0
	I0923 13:16:02.106369    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:02.106369    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:02.106369    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:02.106369    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:02.106369    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:02 GMT
	I0923 13:16:02.106479    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:02.603482    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:02.603482    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:02.603482    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:02.603482    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:02.608066    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:02.608066    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:02.608122    1580 round_trippers.go:580]     Audit-Id: be0e1cb2-b9aa-413b-a1d6-6ea5deed2619
	I0923 13:16:02.608122    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:02.608122    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:02.608122    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:02.608122    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:02.608122    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:02 GMT
	I0923 13:16:02.608712    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:02.609031    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:16:03.103705    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:03.104191    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:03.104241    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:03.104241    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:03.107584    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:03.107584    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:03.107584    1580 round_trippers.go:580]     Audit-Id: 3798485a-ece2-40bd-9d2d-26500b12f4a4
	I0923 13:16:03.107688    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:03.107688    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:03.107688    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:03.107688    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:03.107688    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:03 GMT
	I0923 13:16:03.107855    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:03.602875    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:03.602875    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:03.602875    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:03.602875    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:03.686750    1580 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0923 13:16:03.686750    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:03.686750    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:03.686750    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:03.686844    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:03 GMT
	I0923 13:16:03.686844    1580 round_trippers.go:580]     Audit-Id: dd531f09-3461-44e1-af89-748d4803dfe7
	I0923 13:16:03.686844    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:03.686844    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:03.686930    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:04.102864    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:04.102864    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:04.102864    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:04.102864    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:04.107117    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:04.107200    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:04.107200    1580 round_trippers.go:580]     Audit-Id: 1c56c352-6228-4d2f-b69c-f4f7a78ab953
	I0923 13:16:04.107200    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:04.107273    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:04.107273    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:04.107273    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:04.107273    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:04 GMT
	I0923 13:16:04.107608    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:04.603357    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:04.603357    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:04.603357    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:04.603357    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:04.612403    1580 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 13:16:04.612485    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:04.612485    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:04 GMT
	I0923 13:16:04.612485    1580 round_trippers.go:580]     Audit-Id: 947f94d4-be57-414e-a29e-30eb38340c1c
	I0923 13:16:04.612485    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:04.612485    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:04.612485    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:04.612485    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:04.613967    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:04.614329    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:16:05.102782    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:05.102782    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:05.102782    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:05.102782    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:05.107973    1580 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:16:05.108078    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:05.108078    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:05.108078    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:05.108078    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:05.108166    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:05 GMT
	I0923 13:16:05.108166    1580 round_trippers.go:580]     Audit-Id: 3cd535de-ad80-43b1-8fa7-2846c96a3eb0
	I0923 13:16:05.108166    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:05.108248    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:05.603709    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:05.603709    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:05.603709    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:05.603709    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:05.607280    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:05.607280    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:05.607280    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:05.607280    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:05.607280    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:05.607738    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:05 GMT
	I0923 13:16:05.607738    1580 round_trippers.go:580]     Audit-Id: 87120317-0e0b-4326-835a-fa7abd1a6399
	I0923 13:16:05.607738    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:05.608071    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:06.103300    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:06.103966    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:06.104063    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:06.104063    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:06.107422    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:06.108094    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:06.108094    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:06.108094    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:06.108094    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:06 GMT
	I0923 13:16:06.108094    1580 round_trippers.go:580]     Audit-Id: d8f95454-a96a-4fe3-82a3-1d51c93b4851
	I0923 13:16:06.108094    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:06.108094    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:06.108322    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:06.603717    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:06.603717    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:06.603717    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:06.603717    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:06.608139    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:06.608207    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:06.608207    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:06.608207    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:06 GMT
	I0923 13:16:06.608207    1580 round_trippers.go:580]     Audit-Id: 59d0280d-42ad-43ee-a757-c42b39f20106
	I0923 13:16:06.608207    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:06.608207    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:06.608207    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:06.608505    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:07.102996    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:07.102996    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:07.102996    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:07.102996    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:07.106979    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:07.107070    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:07.107070    1580 round_trippers.go:580]     Audit-Id: ad45acac-2f90-4aa7-96c1-74fc5908671d
	I0923 13:16:07.107148    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:07.107148    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:07.107148    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:07.107148    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:07.107148    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:07 GMT
	I0923 13:16:07.107457    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:07.108135    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:16:07.603498    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:07.603498    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:07.603498    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:07.603498    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:07.606570    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:07.607712    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:07.607712    1580 round_trippers.go:580]     Audit-Id: 8c8ffea5-a341-4673-a40f-edf51163f3d7
	I0923 13:16:07.607712    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:07.607712    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:07.607712    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:07.607712    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:07.607712    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:07 GMT
	I0923 13:16:07.607848    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:08.103504    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:08.103504    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:08.103504    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:08.103504    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:08.106978    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:08.106978    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:08.106978    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:08.106978    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:08.107627    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:08.107627    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:08 GMT
	I0923 13:16:08.107627    1580 round_trippers.go:580]     Audit-Id: 1e7a3353-e0e5-4e1f-b68f-2c9b47e12842
	I0923 13:16:08.107696    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:08.108110    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:08.603026    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:08.603026    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:08.603026    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:08.603026    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:08.607239    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:08.607355    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:08.607355    1580 round_trippers.go:580]     Audit-Id: 792a05f1-9b2a-455c-a2e7-432dbc7dc475
	I0923 13:16:08.607355    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:08.607355    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:08.607355    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:08.607355    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:08.607355    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:08 GMT
	I0923 13:16:08.607703    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:09.103501    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:09.103501    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:09.103501    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:09.103501    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:09.107225    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:09.107225    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:09.107225    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:09 GMT
	I0923 13:16:09.107225    1580 round_trippers.go:580]     Audit-Id: 485ecd3d-d637-4720-9c75-b261f49d3d09
	I0923 13:16:09.107225    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:09.107225    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:09.107225    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:09.107225    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:09.107225    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:09.603829    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:09.603829    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:09.603829    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:09.603829    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:09.608378    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:09.608474    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:09.608474    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:09.608474    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:09.608474    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:09 GMT
	I0923 13:16:09.608474    1580 round_trippers.go:580]     Audit-Id: bf8d3dea-5ad8-4bea-890b-2fc4e6ea0dea
	I0923 13:16:09.608474    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:09.608474    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:09.608629    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:09.608796    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:16:10.103005    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:10.103005    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:10.103005    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:10.103005    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:10.106646    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:10.106777    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:10.106777    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:10.106777    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:10.106777    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:10 GMT
	I0923 13:16:10.106777    1580 round_trippers.go:580]     Audit-Id: 8ed2a273-4c5c-49f3-bca0-8ef57c1f8534
	I0923 13:16:10.106777    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:10.106907    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:10.107167    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:10.603033    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:10.603033    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:10.603033    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:10.603033    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:10.607162    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:10.607162    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:10.607162    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:10.607162    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:10.607162    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:10.607162    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:10.607162    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:10 GMT
	I0923 13:16:10.607162    1580 round_trippers.go:580]     Audit-Id: 71571776-e35c-4c2b-b196-176ff521868a
	I0923 13:16:10.607524    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:11.103859    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:11.103859    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:11.103859    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:11.103859    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:11.108101    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:11.108211    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:11.108211    1580 round_trippers.go:580]     Audit-Id: 5afa2d39-db27-49a1-aa45-d264f2ff23bd
	I0923 13:16:11.108211    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:11.108211    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:11.108211    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:11.108211    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:11.108211    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:11 GMT
	I0923 13:16:11.108605    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:11.604090    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:11.604090    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:11.604090    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:11.604090    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:11.608134    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:11.608981    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:11.608981    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:11.608981    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:11.608981    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:11.608981    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:11.608981    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:11 GMT
	I0923 13:16:11.608981    1580 round_trippers.go:580]     Audit-Id: be362179-c8fa-428b-b9bf-144f2fa6df6a
	I0923 13:16:11.609263    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:11.609382    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:16:12.103755    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:12.103755    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:12.104268    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:12.104268    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:12.108121    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:12.108199    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:12.108199    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:12.108199    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:12.108199    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:12.108199    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:12 GMT
	I0923 13:16:12.108199    1580 round_trippers.go:580]     Audit-Id: ff221b1b-bb06-4409-8a34-da90dfa151ba
	I0923 13:16:12.108199    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:12.108470    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:12.603879    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:12.604527    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:12.604618    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:12.604618    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:12.607905    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:12.607905    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:12.607905    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:12.607905    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:12.607905    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:12.607905    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:12.608083    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:12 GMT
	I0923 13:16:12.608083    1580 round_trippers.go:580]     Audit-Id: 2473f0ec-1a26-4af8-94e3-70293b23259c
	I0923 13:16:12.608441    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:13.104318    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:13.104439    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:13.104439    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:13.104439    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:13.107933    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:13.108007    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:13.108007    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:13.108007    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:13.108007    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:13.108007    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:13 GMT
	I0923 13:16:13.108007    1580 round_trippers.go:580]     Audit-Id: a5bcd8f8-7be6-4db2-a3f6-9c2dac033fa9
	I0923 13:16:13.108081    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:13.108110    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:13.604384    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:13.604514    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:13.604514    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:13.604638    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:13.607929    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:13.607929    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:13.607929    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:13.608045    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:13.608045    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:13 GMT
	I0923 13:16:13.608045    1580 round_trippers.go:580]     Audit-Id: eb4a8273-8852-481c-9d26-9650aa4fbcc8
	I0923 13:16:13.608045    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:13.608045    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:13.608236    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:14.103695    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:14.103695    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:14.103695    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:14.103695    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:14.109566    1580 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:16:14.109566    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:14.109566    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:14 GMT
	I0923 13:16:14.109566    1580 round_trippers.go:580]     Audit-Id: ec80884c-4acb-4528-9135-1da6baed7f79
	I0923 13:16:14.109566    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:14.109566    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:14.109566    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:14.109566    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:14.110251    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:14.110251    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:16:14.603541    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:14.604184    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:14.604184    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:14.604184    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:14.607438    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:14.607509    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:14.607509    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:14.607509    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:14.607509    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:14.607509    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:14.607509    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:14 GMT
	I0923 13:16:14.607509    1580 round_trippers.go:580]     Audit-Id: 5ce74a12-6a62-4ea6-80a5-c9feae3e1266
	I0923 13:16:14.607641    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:15.103298    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:15.103298    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:15.103298    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:15.103298    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:15.106240    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:15.107195    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:15.107195    1580 round_trippers.go:580]     Audit-Id: bf9e1299-530f-4b65-8015-06e87928b192
	I0923 13:16:15.107269    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:15.107269    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:15.107269    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:15.107269    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:15.107269    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:15 GMT
	I0923 13:16:15.107480    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:15.604972    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:15.605071    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:15.605071    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:15.605071    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:15.608032    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:15.608782    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:15.608782    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:15.608782    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:15.608782    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:15.608782    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:15.608782    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:15 GMT
	I0923 13:16:15.608782    1580 round_trippers.go:580]     Audit-Id: 0369805d-2bd9-48d3-a031-48774dafb868
	I0923 13:16:15.609050    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:16.103899    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:16.103899    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:16.103899    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:16.103899    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:16.110726    1580 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:16:16.110726    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:16.110726    1580 round_trippers.go:580]     Audit-Id: b458183d-20b0-4690-814d-026ae2461001
	I0923 13:16:16.110726    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:16.110726    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:16.110726    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:16.111258    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:16.111258    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:16 GMT
	I0923 13:16:16.111361    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:16.111718    1580 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:16:16.604521    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:16.604521    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:16.604521    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:16.604521    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:16.608029    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:16.608029    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:16.608029    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:16.608029    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:16.608029    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:16.608029    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:16.608029    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:16 GMT
	I0923 13:16:16.608029    1580 round_trippers.go:580]     Audit-Id: 62c3ec28-41c8-403d-a2ba-0f48ded422e9
	I0923 13:16:16.608029    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:17.104063    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:17.104063    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:17.104063    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:17.104063    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:17.108508    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:17.108612    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:17.108671    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:17.108671    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:17.108671    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:17.108671    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:17 GMT
	I0923 13:16:17.108671    1580 round_trippers.go:580]     Audit-Id: 1fd09162-ca97-44eb-b117-54e669040b14
	I0923 13:16:17.108671    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:17.108963    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:17.604421    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:17.604421    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:17.604421    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:17.604421    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:17.608206    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:17.608206    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:17.608303    1580 round_trippers.go:580]     Audit-Id: 5aef2a72-418d-4753-98f7-54641aa1c408
	I0923 13:16:17.608303    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:17.608303    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:17.608303    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:17.608303    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:17.608303    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:17 GMT
	I0923 13:16:17.608303    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"608","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3397 chars]
	I0923 13:16:18.103551    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:18.103551    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.103551    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.103551    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.107929    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:18.107929    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.107929    1580 round_trippers.go:580]     Audit-Id: b229148c-6c52-47fb-8adb-cad789feb5e6
	I0923 13:16:18.107929    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.107929    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.107929    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.107929    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.107929    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.108369    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"639","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3777 chars]
	I0923 13:16:18.109004    1580 node_ready.go:49] node "multinode-560300-m02" has status "Ready":"True"
	I0923 13:16:18.109004    1580 node_ready.go:38] duration metric: took 29.5056179s for node "multinode-560300-m02" to be "Ready" ...
	I0923 13:16:18.109084    1580 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:16:18.109282    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods
	I0923 13:16:18.109282    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.109282    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.109446    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.116073    1580 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:16:18.116073    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.116073    1580 round_trippers.go:580]     Audit-Id: 929f0df8-64e8-4b26-b8ad-94a787b04ca9
	I0923 13:16:18.116073    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.116073    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.116073    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.116073    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.116073    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.117782    1580 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"639"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"444","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 72677 chars]
	I0923 13:16:18.120953    1580 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.121001    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:16:18.121001    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.121001    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.121001    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.123490    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:18.123490    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.124126    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.124126    1580 round_trippers.go:580]     Audit-Id: 17588612-e8a6-4cb5-a6b9-ca17d7599735
	I0923 13:16:18.124126    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.124126    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.124126    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.124126    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.124275    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"444","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I0923 13:16:18.124824    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:16:18.124877    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.124877    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.124877    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.126475    1580 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 13:16:18.126475    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.127484    1580 round_trippers.go:580]     Audit-Id: 7d8e89ba-a201-4165-b857-ff29b66faef7
	I0923 13:16:18.127484    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.127484    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.127484    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.127484    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.127484    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.127484    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"451","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0923 13:16:18.127484    1580 pod_ready.go:93] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"True"
	I0923 13:16:18.127484    1580 pod_ready.go:82] duration metric: took 6.4826ms for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.127484    1580 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.127484    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:16:18.127484    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.127484    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.127484    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.130481    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:18.130481    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.130481    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.130481    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.130546    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.130546    1580 round_trippers.go:580]     Audit-Id: 16f39d3c-a59b-475f-a659-c749582e174d
	I0923 13:16:18.130546    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.130546    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.131077    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"67f0bcb0-9d38-4450-9001-134a810ba113","resourceVersion":"368","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.153.215:2379","kubernetes.io/config.hash":"8712c4ce8da12187fec77f2ae7f14852","kubernetes.io/config.mirror":"8712c4ce8da12187fec77f2ae7f14852","kubernetes.io/config.seen":"2024-09-23T13:12:54.655467491Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6476 chars]
	I0923 13:16:18.131566    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:16:18.131617    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.131617    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.131617    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.133710    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:18.133769    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.133769    1580 round_trippers.go:580]     Audit-Id: 0b4f8b33-5d8a-489c-a406-ee6c5b2e76a0
	I0923 13:16:18.133769    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.133769    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.133824    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.133824    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.133824    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.134434    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"451","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0923 13:16:18.134785    1580 pod_ready.go:93] pod "etcd-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:16:18.134844    1580 pod_ready.go:82] duration metric: took 7.3007ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.134844    1580 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.134908    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:16:18.134908    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.134980    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.134980    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.137335    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:18.137484    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.137484    1580 round_trippers.go:580]     Audit-Id: fa8b34d3-0b8d-402c-ae8e-11cf0dc84f1d
	I0923 13:16:18.137484    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.137484    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.137484    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.137484    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.137484    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.137730    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"5a468385-fdb9-4c85-b241-6cee87e52d9c","resourceVersion":"406","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.153.215:8443","kubernetes.io/config.hash":"013b4f74438b81d3e778f9e09be4f2f0","kubernetes.io/config.mirror":"013b4f74438b81d3e778f9e09be4f2f0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655472192Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0923 13:16:18.138234    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:16:18.138234    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.138234    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.138296    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.140943    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:18.140943    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.140943    1580 round_trippers.go:580]     Audit-Id: bc233ee2-7a91-400d-83c7-acb2eb140b16
	I0923 13:16:18.140943    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.140943    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.140943    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.140943    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.140943    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.140943    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"451","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0923 13:16:18.140943    1580 pod_ready.go:93] pod "kube-apiserver-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:16:18.140943    1580 pod_ready.go:82] duration metric: took 6.0987ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.140943    1580 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.140943    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:16:18.140943    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.141893    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.141893    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.144583    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:18.144583    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.144583    1580 round_trippers.go:580]     Audit-Id: d2b3444e-82b2-4e42-af97-a4a379d3b6c0
	I0923 13:16:18.145088    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.145088    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.145088    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.145088    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.145088    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.145288    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"365","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0923 13:16:18.145453    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:16:18.145453    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.145453    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.145453    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.147673    1580 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:16:18.147673    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.147673    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.147673    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.147673    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.147673    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.147673    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.147673    1580 round_trippers.go:580]     Audit-Id: 42584441-ee48-4ded-92fe-133a6f82f1cb
	I0923 13:16:18.148148    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"451","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0923 13:16:18.148497    1580 pod_ready.go:93] pod "kube-controller-manager-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:16:18.148497    1580 pod_ready.go:82] duration metric: took 7.5533ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.148497    1580 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.304468    1580 request.go:632] Waited for 155.7856ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:16:18.304468    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:16:18.304468    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.304468    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.305171    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.308699    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:18.308782    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.308844    1580 round_trippers.go:580]     Audit-Id: cc3a6e7e-0d81-4fcd-a035-2034e7dcc785
	I0923 13:16:18.308844    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.308844    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.308844    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.308844    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.308936    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.309357    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5t97","generateName":"kube-proxy-","namespace":"kube-system","uid":"49d7601a-bda4-421e-bde7-acc35c157962","resourceVersion":"615","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6202 chars]
	I0923 13:16:18.504505    1580 request.go:632] Waited for 194.3029ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:18.504505    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:16:18.504505    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.504505    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.504505    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.507817    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:18.507908    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.507908    1580 round_trippers.go:580]     Audit-Id: 2d06d8b1-ea41-4640-9c90-1b418588e3cf
	I0923 13:16:18.507908    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.508020    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.508020    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.508020    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.508020    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.508135    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"639","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3777 chars]
	I0923 13:16:18.508897    1580 pod_ready.go:93] pod "kube-proxy-g5t97" in "kube-system" namespace has status "Ready":"True"
	I0923 13:16:18.508897    1580 pod_ready.go:82] duration metric: took 360.3755ms for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.508996    1580 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.703981    1580 request.go:632] Waited for 194.9187ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:16:18.703981    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:16:18.703981    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.703981    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.703981    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.707184    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:18.708095    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.708095    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.708095    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:18 GMT
	I0923 13:16:18.708095    1580 round_trippers.go:580]     Audit-Id: c29c3238-4d42-44f3-a87f-1e710b1615cc
	I0923 13:16:18.708095    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.708095    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.708095    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.708291    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"401","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6199 chars]
	I0923 13:16:18.903808    1580 request.go:632] Waited for 194.929ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:16:18.904122    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:16:18.904122    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:18.904122    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:18.904122    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:18.907581    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:18.907581    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:18.907581    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:18.907581    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:18.907581    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:19 GMT
	I0923 13:16:18.907581    1580 round_trippers.go:580]     Audit-Id: 73e6a9ca-9bae-4077-85c1-d6dea94b8e3a
	I0923 13:16:18.907581    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:18.907581    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:18.907581    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"451","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0923 13:16:18.908495    1580 pod_ready.go:93] pod "kube-proxy-rgmcw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:16:18.908495    1580 pod_ready.go:82] duration metric: took 399.4719ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:18.908575    1580 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:19.104321    1580 request.go:632] Waited for 195.6675ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:16:19.104321    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:16:19.104321    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:19.104321    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:19.104321    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:19.109005    1580 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:16:19.109175    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:19.109175    1580 round_trippers.go:580]     Audit-Id: 692c7d28-9276-4f26-9d33-48d9747676ca
	I0923 13:16:19.109175    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:19.109175    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:19.109175    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:19.109175    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:19.109175    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:19 GMT
	I0923 13:16:19.109175    1580 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"405","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0923 13:16:19.304146    1580 request.go:632] Waited for 193.9659ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:16:19.304618    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes/multinode-560300
	I0923 13:16:19.304618    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:19.304618    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:19.304618    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:19.308094    1580 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:16:19.308094    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:19.308094    1580 round_trippers.go:580]     Audit-Id: 31389ae5-80b0-490e-96b1-b4153cf650df
	I0923 13:16:19.308094    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:19.308094    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:19.308094    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:19.308094    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:19.308094    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:19 GMT
	I0923 13:16:19.308094    1580 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"451","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0923 13:16:19.308783    1580 pod_ready.go:93] pod "kube-scheduler-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:16:19.308783    1580 pod_ready.go:82] duration metric: took 400.1806ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:16:19.308783    1580 pod_ready.go:39] duration metric: took 1.1996179s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:16:19.308783    1580 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:16:19.317795    1580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:16:19.346608    1580 system_svc.go:56] duration metric: took 37.823ms WaitForService to wait for kubelet
	I0923 13:16:19.346608    1580 kubeadm.go:582] duration metric: took 30.9776873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:16:19.346608    1580 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:16:19.504268    1580 request.go:632] Waited for 157.6497ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.153.215:8443/api/v1/nodes
	I0923 13:16:19.504268    1580 round_trippers.go:463] GET https://172.19.153.215:8443/api/v1/nodes
	I0923 13:16:19.504268    1580 round_trippers.go:469] Request Headers:
	I0923 13:16:19.504268    1580 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:16:19.504268    1580 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:16:19.510677    1580 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:16:19.510677    1580 round_trippers.go:577] Response Headers:
	I0923 13:16:19.510677    1580 round_trippers.go:580]     Audit-Id: dcfe6d55-6374-4bbd-9606-25e3ad7e81a6
	I0923 13:16:19.510781    1580 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:16:19.510781    1580 round_trippers.go:580]     Content-Type: application/json
	I0923 13:16:19.510781    1580 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:16:19.510781    1580 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:16:19.510781    1580 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:16:19 GMT
	I0923 13:16:19.511221    1580 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"643"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"451","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9661 chars]
	I0923 13:16:19.512171    1580 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:16:19.512256    1580 node_conditions.go:123] node cpu capacity is 2
	I0923 13:16:19.512256    1580 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:16:19.512344    1580 node_conditions.go:123] node cpu capacity is 2
	I0923 13:16:19.512344    1580 node_conditions.go:105] duration metric: took 165.7249ms to run NodePressure ...
	I0923 13:16:19.512344    1580 start.go:241] waiting for startup goroutines ...
	I0923 13:16:19.512465    1580 start.go:255] writing updated cluster config ...
	I0923 13:16:19.523292    1580 ssh_runner.go:195] Run: rm -f paused
	I0923 13:16:19.641461    1580 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 13:16:19.644382    1580 out.go:177] * Done! kubectl is now configured to use "multinode-560300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 13:13:22 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:22.755856861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:13:22 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:22.774206011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:13:22 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:22.774328019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:13:22 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:22.774397423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:13:22 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:22.776872978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:13:22 multinode-560300 cri-dockerd[1325]: time="2024-09-23T13:13:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/544604cdd801700da010dbe0f8891d8a0475a9ab19a6595fc16f00b0d720e931/resolv.conf as [nameserver 172.19.144.1]"
	Sep 23 13:13:22 multinode-560300 cri-dockerd[1325]: time="2024-09-23T13:13:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb12eb8fe1eab6bc65e5f4fd56be7516f7adc5ad5436b16ef67fa13648765407/resolv.conf as [nameserver 172.19.144.1]"
	Sep 23 13:13:23 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:23.107057702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:13:23 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:23.107205311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:13:23 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:23.107235913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:13:23 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:23.107342319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:13:23 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:23.201062140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:13:23 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:23.202306716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:13:23 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:23.202326918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:13:23 multinode-560300 dockerd[1433]: time="2024-09-23T13:13:23.202518429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:16:41 multinode-560300 dockerd[1433]: time="2024-09-23T13:16:41.679650926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:16:41 multinode-560300 dockerd[1433]: time="2024-09-23T13:16:41.681270028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:16:41 multinode-560300 dockerd[1433]: time="2024-09-23T13:16:41.681656552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:16:41 multinode-560300 dockerd[1433]: time="2024-09-23T13:16:41.682035576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:16:41 multinode-560300 cri-dockerd[1325]: time="2024-09-23T13:16:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f294b19f20ba1f83903c16a95a0b0577ff48d938be75dcc6def5d592c13312f9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 13:16:43 multinode-560300 cri-dockerd[1325]: time="2024-09-23T13:16:43Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Sep 23 13:16:43 multinode-560300 dockerd[1433]: time="2024-09-23T13:16:43.644929897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:16:43 multinode-560300 dockerd[1433]: time="2024-09-23T13:16:43.645133612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:16:43 multinode-560300 dockerd[1433]: time="2024-09-23T13:16:43.645159913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:16:43 multinode-560300 dockerd[1433]: time="2024-09-23T13:16:43.645294523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	78de2657becad       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   45 seconds ago      Running             busybox                   0                   f294b19f20ba1       busybox-7dff88458-wwgwh
	648460d0f31f3       c69fa2e9cbf5f                                                                                         4 minutes ago       Running             coredns                   0                   eb12eb8fe1eab       coredns-7c65d6cfc9-glx94
	b07ca58581540       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   544604cdd8017       storage-provisioner
	a83589d1098af       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              4 minutes ago       Running             kindnet-cni               0                   0f322d00a55b9       kindnet-mdnmc
	c92a84f5caf22       60c005f310ff3                                                                                         4 minutes ago       Running             kube-proxy                0                   cf2fc1e617749       kube-proxy-rgmcw
	90116ded443d0       2e96e5913fc06                                                                                         4 minutes ago       Running             etcd                      0                   7c23acc78f4c2       etcd-multinode-560300
	117d706d07d2f       9aa1fad941575                                                                                         4 minutes ago       Running             kube-scheduler            0                   b160f7a7a5d22       kube-scheduler-multinode-560300
	03ce0954301e2       175ffd71cce3d                                                                                         4 minutes ago       Running             kube-controller-manager   0                   67b7e79ad6b59       kube-controller-manager-multinode-560300
	8ab41eeaea91b       6bab7719df100                                                                                         4 minutes ago       Running             kube-apiserver            0                   6ef47416b046a       kube-apiserver-multinode-560300
	
	
	==> coredns [648460d0f31f] <==
	[INFO] 10.244.1.2:59359 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098007s
	[INFO] 10.244.0.3:46800 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213015s
	[INFO] 10.244.0.3:38681 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000058704s
	[INFO] 10.244.0.3:52711 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127209s
	[INFO] 10.244.0.3:54030 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000224916s
	[INFO] 10.244.0.3:55333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000045404s
	[INFO] 10.244.0.3:49850 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079305s
	[INFO] 10.244.0.3:54603 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043103s
	[INFO] 10.244.0.3:56551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014271s
	[INFO] 10.244.1.2:45863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113008s
	[INFO] 10.244.1.2:36717 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085106s
	[INFO] 10.244.1.2:43150 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082206s
	[INFO] 10.244.1.2:34236 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197714s
	[INFO] 10.244.0.3:37601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112108s
	[INFO] 10.244.0.3:60698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178513s
	[INFO] 10.244.0.3:35977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068705s
	[INFO] 10.244.0.3:54979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114608s
	[INFO] 10.244.1.2:58051 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107208s
	[INFO] 10.244.1.2:36408 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000226517s
	[INFO] 10.244.1.2:33973 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210915s
	[INFO] 10.244.1.2:45767 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104007s
	[INFO] 10.244.0.3:36090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125109s
	[INFO] 10.244.0.3:46993 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240317s
	[INFO] 10.244.0.3:40120 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087606s
	[INFO] 10.244.0.3:46564 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080205s
	
	
	==> describe nodes <==
	Name:               multinode-560300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-560300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-560300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_12_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-560300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:17:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:17:00 +0000   Mon, 23 Sep 2024 13:12:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:17:00 +0000   Mon, 23 Sep 2024 13:12:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:17:00 +0000   Mon, 23 Sep 2024 13:12:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:17:00 +0000   Mon, 23 Sep 2024 13:13:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.153.215
	  Hostname:    multinode-560300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bb53feeca4446a48f4c6c505ec6bcdd
	  System UUID:                d1328c2e-dfd4-f844-981c-cc7a85ce582e
	  Boot ID:                    cf609af8-5048-4da2-a700-e7aa190c09c8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wwgwh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-7c65d6cfc9-glx94                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m28s
	  kube-system                 etcd-multinode-560300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m34s
	  kube-system                 kindnet-mdnmc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m29s
	  kube-system                 kube-apiserver-multinode-560300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-controller-manager-multinode-560300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-proxy-rgmcw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-multinode-560300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m27s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m41s (x8 over 4m41s)  kubelet          Node multinode-560300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s (x8 over 4m41s)  kubelet          Node multinode-560300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s (x7 over 4m41s)  kubelet          Node multinode-560300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m34s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m34s                  kubelet          Node multinode-560300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m34s                  kubelet          Node multinode-560300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m34s                  kubelet          Node multinode-560300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m30s                  node-controller  Node multinode-560300 event: Registered Node multinode-560300 in Controller
	  Normal  NodeReady                4m6s                   kubelet          Node multinode-560300 status is now: NodeReady
	
	
	Name:               multinode-560300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-560300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-560300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_15_48_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:15:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-560300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:17:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:16:49 +0000   Mon, 23 Sep 2024 13:15:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:16:49 +0000   Mon, 23 Sep 2024 13:15:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:16:49 +0000   Mon, 23 Sep 2024 13:15:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:16:49 +0000   Mon, 23 Sep 2024 13:16:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.147.68
	  Hostname:    multinode-560300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b45257e717654013a73b9382fef17a04
	  System UUID:                05b2789d-962f-ff45-a09c-66a2273cfcfc
	  Boot ID:                    20e6a726-4d12-438a-8f66-481ae83f34bf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h4tgf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kindnet-qg99z              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      101s
	  kube-system                 kube-proxy-g5t97           0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x2 over 101s)  kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x2 over 101s)  kubelet          Node multinode-560300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x2 over 101s)  kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                  node-controller  Node multinode-560300-m02 event: Registered Node multinode-560300-m02 in Controller
	  Normal  NodeReady                70s                  kubelet          Node multinode-560300-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.361555] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +40.402423] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.147924] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[Sep23 13:12] systemd-fstab-generator[999]: Ignoring "noauto" option for root device
	[  +0.092867] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.468232] systemd-fstab-generator[1038]: Ignoring "noauto" option for root device
	[  +0.181203] systemd-fstab-generator[1050]: Ignoring "noauto" option for root device
	[  +0.224760] systemd-fstab-generator[1064]: Ignoring "noauto" option for root device
	[  +2.767640] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.169033] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.173764] systemd-fstab-generator[1302]: Ignoring "noauto" option for root device
	[  +0.247104] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[ +10.404780] systemd-fstab-generator[1419]: Ignoring "noauto" option for root device
	[  +0.091610] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.639506] systemd-fstab-generator[1671]: Ignoring "noauto" option for root device
	[  +5.810135] systemd-fstab-generator[1816]: Ignoring "noauto" option for root device
	[  +0.086252] kauditd_printk_skb: 70 callbacks suppressed
	[  +7.567627] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +0.117895] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.849742] systemd-fstab-generator[2327]: Ignoring "noauto" option for root device
	[  +0.156823] kauditd_printk_skb: 12 callbacks suppressed
	[Sep23 13:13] kauditd_printk_skb: 51 callbacks suppressed
	[Sep23 13:16] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [90116ded443d] <==
	{"level":"info","ts":"2024-09-23T13:12:49.923616Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:12:49.925028Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:12:49.925275Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:12:49.928523Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:12:49.929520Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T13:12:49.937555Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.153.215:2379"}
	{"level":"info","ts":"2024-09-23T13:12:49.948418Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:13:06.128232Z","caller":"traceutil/trace.go:171","msg":"trace[52092060] linearizableReadLoop","detail":"{readStateIndex:419; appliedIndex:418; }","duration":"264.604563ms","start":"2024-09-23T13:13:05.863612Z","end":"2024-09-23T13:13:06.128216Z","steps":["trace[52092060] 'read index received'  (duration: 264.454653ms)","trace[52092060] 'applied index is now lower than readState.Index'  (duration: 149.31µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T13:13:06.128517Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"220.662547ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-560300\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2024-09-23T13:13:06.128588Z","caller":"traceutil/trace.go:171","msg":"trace[760176175] range","detail":"{range_begin:/registry/minions/multinode-560300; range_end:; response_count:1; response_revision:405; }","duration":"220.746352ms","start":"2024-09-23T13:13:05.907832Z","end":"2024-09-23T13:13:06.128579Z","steps":["trace[760176175] 'agreement among raft nodes before linearized reading'  (duration: 220.635445ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:13:06.128521Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.89368ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T13:13:06.128702Z","caller":"traceutil/trace.go:171","msg":"trace[989395181] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:405; }","duration":"265.070491ms","start":"2024-09-23T13:13:05.863607Z","end":"2024-09-23T13:13:06.128678Z","steps":["trace[989395181] 'agreement among raft nodes before linearized reading'  (duration: 264.869879ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:13:06.128840Z","caller":"traceutil/trace.go:171","msg":"trace[374998832] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"331.347287ms","start":"2024-09-23T13:13:05.797485Z","end":"2024-09-23T13:13:06.128832Z","steps":["trace[374998832] 'process raft request'  (duration: 330.626242ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:13:06.130211Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:13:05.797467Z","time spent":"331.494296ms","remote":"127.0.0.1:47452","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4322,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-multinode-560300\" mod_revision:303 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-multinode-560300\" value_size:4256 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-multinode-560300\" > >"}
	{"level":"info","ts":"2024-09-23T13:13:07.574854Z","caller":"traceutil/trace.go:171","msg":"trace[27508928] linearizableReadLoop","detail":"{readStateIndex:425; appliedIndex:424; }","duration":"166.696328ms","start":"2024-09-23T13:13:07.408139Z","end":"2024-09-23T13:13:07.574836Z","steps":["trace[27508928] 'read index received'  (duration: 166.474015ms)","trace[27508928] 'applied index is now lower than readState.Index'  (duration: 221.513µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T13:13:07.575092Z","caller":"traceutil/trace.go:171","msg":"trace[1855103153] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"200.421918ms","start":"2024-09-23T13:13:07.374654Z","end":"2024-09-23T13:13:07.575076Z","steps":["trace[1855103153] 'process raft request'  (duration: 199.932888ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:13:07.575101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.024548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-560300\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2024-09-23T13:13:07.575142Z","caller":"traceutil/trace.go:171","msg":"trace[1581729482] range","detail":"{range_begin:/registry/minions/multinode-560300; range_end:; response_count:1; response_revision:411; }","duration":"167.077652ms","start":"2024-09-23T13:13:07.408054Z","end":"2024-09-23T13:13:07.575132Z","steps":["trace[1581729482] 'agreement among raft nodes before linearized reading'  (duration: 166.961144ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:13:37.479080Z","caller":"traceutil/trace.go:171","msg":"trace[606684669] transaction","detail":"{read_only:false; response_revision:460; number_of_response:1; }","duration":"118.063621ms","start":"2024-09-23T13:13:37.360996Z","end":"2024-09-23T13:13:37.479060Z","steps":["trace[606684669] 'process raft request'  (duration: 116.961153ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:15:42.782624Z","caller":"traceutil/trace.go:171","msg":"trace[1820247970] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"326.481319ms","start":"2024-09-23T13:15:42.456127Z","end":"2024-09-23T13:15:42.782609Z","steps":["trace[1820247970] 'process raft request'  (duration: 326.371812ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:15:42.782900Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T13:15:42.456107Z","time spent":"326.713834ms","remote":"127.0.0.1:47436","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:557 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-23T13:15:58.286059Z","caller":"traceutil/trace.go:171","msg":"trace[1693771128] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"172.222529ms","start":"2024-09-23T13:15:58.113821Z","end":"2024-09-23T13:15:58.286043Z","steps":["trace[1693771128] 'process raft request'  (duration: 172.027517ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T13:15:58.297155Z","caller":"traceutil/trace.go:171","msg":"trace[137665232] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"137.533248ms","start":"2024-09-23T13:15:58.159608Z","end":"2024-09-23T13:15:58.297141Z","steps":["trace[137665232] 'process raft request'  (duration: 137.25593ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T13:16:03.693661Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.075001ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.19.153.215\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-09-23T13:16:03.693738Z","caller":"traceutil/trace.go:171","msg":"trace[1456286988] range","detail":"{range_begin:/registry/masterleases/172.19.153.215; range_end:; response_count:1; response_revision:619; }","duration":"151.161706ms","start":"2024-09-23T13:16:03.542563Z","end":"2024-09-23T13:16:03.693724Z","steps":["trace[1456286988] 'range keys from in-memory index tree'  (duration: 150.961594ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:17:29 up 6 min,  0 users,  load average: 0.34, 0.30, 0.15
	Linux multinode-560300 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a83589d1098a] <==
	I0923 13:16:28.967996       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:16:38.970592       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:16:38.970716       1 main.go:299] handling current node
	I0923 13:16:38.970737       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:16:38.971139       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:16:48.972714       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:16:48.973273       1 main.go:299] handling current node
	I0923 13:16:48.973304       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:16:48.973344       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:16:58.970295       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:16:58.970481       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:16:58.970961       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:16:58.971046       1 main.go:299] handling current node
	I0923 13:17:08.963541       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:17:08.963635       1 main.go:299] handling current node
	I0923 13:17:08.963651       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:17:08.963657       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:17:18.971436       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:17:18.971548       1 main.go:299] handling current node
	I0923 13:17:18.971568       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:17:18.971575       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:17:28.969749       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:17:28.969872       1 main.go:299] handling current node
	I0923 13:17:28.969962       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:17:28.969990       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8ab41eeaea91] <==
	I0923 13:12:52.240265       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0923 13:12:52.248654       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0923 13:12:52.248669       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 13:12:53.285856       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 13:12:53.370647       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 13:12:53.542326       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0923 13:12:53.555545       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.153.215]
	I0923 13:12:53.556574       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:12:53.567653       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 13:12:54.297566       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 13:12:54.735181       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 13:12:54.837702       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0923 13:12:54.863885       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 13:12:59.638339       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0923 13:12:59.946115       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0923 13:16:47.235923       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57546: use of closed network connection
	E0923 13:16:47.677596       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57548: use of closed network connection
	E0923 13:16:48.215330       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57550: use of closed network connection
	E0923 13:16:48.669405       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57552: use of closed network connection
	E0923 13:16:49.097978       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57554: use of closed network connection
	E0923 13:16:49.535168       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57556: use of closed network connection
	E0923 13:16:50.304513       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57559: use of closed network connection
	E0923 13:17:00.725981       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57561: use of closed network connection
	E0923 13:17:01.122649       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57564: use of closed network connection
	E0923 13:17:11.569682       1 conn.go:339] Error on socket receive: read tcp 172.19.153.215:8443->172.19.144.1:57566: use of closed network connection
	
	
	==> kube-controller-manager [03ce0954301e] <==
	I0923 13:15:47.812782       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-560300-m02\" does not exist"
	I0923 13:15:47.833513       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-560300-m02" podCIDRs=["10.244.1.0/24"]
	I0923 13:15:47.833556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:15:47.833579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:15:47.854689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:15:48.074710       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:15:48.577567       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:15:49.029149       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-560300-m02"
	I0923 13:15:49.147773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:15:58.288452       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:16:18.163211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:16:18.163593       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:16:18.178808       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:16:19.052660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:16:41.119030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="77.219959ms"
	I0923 13:16:41.132046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.958015ms"
	I0923 13:16:41.132539       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="79.405µs"
	I0923 13:16:41.139508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.602µs"
	I0923 13:16:41.140167       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.502µs"
	I0923 13:16:44.093781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.007132ms"
	I0923 13:16:44.095098       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.802µs"
	I0923 13:16:44.609184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.274121ms"
	I0923 13:16:44.609321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.903µs"
	I0923 13:16:49.425840       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:17:00.433847       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300"
	
	
	==> kube-proxy [c92a84f5caf2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:13:01.510581       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:13:01.528211       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.153.215"]
	E0923 13:13:01.528393       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:13:01.595991       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:13:01.596175       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:13:01.596207       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:13:01.601897       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:13:01.602395       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:13:01.602427       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:13:01.610743       1 config.go:199] "Starting service config controller"
	I0923 13:13:01.610798       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:13:01.610828       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:13:01.610834       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:13:01.612235       1 config.go:328] "Starting node config controller"
	I0923 13:13:01.612451       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:13:01.710868       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:13:01.711136       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:13:01.712783       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [117d706d07d2] <==
	W0923 13:12:52.395300       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 13:12:52.395522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.490447       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:12:52.490806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.548160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 13:12:52.548442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.602117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 13:12:52.602162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.677098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:12:52.677310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.689862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:12:52.690136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.707741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:12:52.707845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.743202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:12:52.743233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.840286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:12:52.840633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.860952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:12:52.861450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.904935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:12:52.905322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.968156       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:12:52.968278       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 13:12:55.111169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:13:22 multinode-560300 kubelet[2226]: I0923 13:13:22.947293    2226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb12eb8fe1eab6bc65e5f4fd56be7516f7adc5ad5436b16ef67fa13648765407"
	Sep 23 13:13:22 multinode-560300 kubelet[2226]: I0923 13:13:22.951522    2226 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="544604cdd801700da010dbe0f8891d8a0475a9ab19a6595fc16f00b0d720e931"
	Sep 23 13:13:23 multinode-560300 kubelet[2226]: I0923 13:13:23.988171    2226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.988156592 podStartE2EDuration="16.988156592s" podCreationTimestamp="2024-09-23 13:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-23 13:13:23.987994482 +0000 UTC m=+29.420360499" watchObservedRunningTime="2024-09-23 13:13:23.988156592 +0000 UTC m=+29.420522609"
	Sep 23 13:13:54 multinode-560300 kubelet[2226]: E0923 13:13:54.782169    2226 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:13:54 multinode-560300 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:13:54 multinode-560300 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:13:54 multinode-560300 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:13:54 multinode-560300 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:14:54 multinode-560300 kubelet[2226]: E0923 13:14:54.786322    2226 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:14:54 multinode-560300 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:14:54 multinode-560300 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:14:54 multinode-560300 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:14:54 multinode-560300 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:15:54 multinode-560300 kubelet[2226]: E0923 13:15:54.781005    2226 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:15:54 multinode-560300 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:15:54 multinode-560300 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:15:54 multinode-560300 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:15:54 multinode-560300 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:16:41 multinode-560300 kubelet[2226]: I0923 13:16:41.112992    2226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-glx94" podStartSLOduration=221.112974564 podStartE2EDuration="3m41.112974564s" podCreationTimestamp="2024-09-23 13:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-23 13:13:24.005666564 +0000 UTC m=+29.438032581" watchObservedRunningTime="2024-09-23 13:16:41.112974564 +0000 UTC m=+226.545340581"
	Sep 23 13:16:41 multinode-560300 kubelet[2226]: I0923 13:16:41.157736    2226 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9n7w\" (UniqueName: \"kubernetes.io/projected/5dc4d731-6160-4e1d-b62d-508cb342a308-kube-api-access-r9n7w\") pod \"busybox-7dff88458-wwgwh\" (UID: \"5dc4d731-6160-4e1d-b62d-508cb342a308\") " pod="default/busybox-7dff88458-wwgwh"
	Sep 23 13:16:54 multinode-560300 kubelet[2226]: E0923 13:16:54.782584    2226 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:16:54 multinode-560300 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:16:54 multinode-560300 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:16:54 multinode-560300 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:16:54 multinode-560300 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-560300 -n multinode-560300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-560300 -n multinode-560300: (10.3816948s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-560300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (51.54s)

TestMultiNode/serial/RestartKeepsNodes (539.21s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-560300
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-560300
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-560300: (1m31.4360246s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-560300 --wait=true -v=8 --alsologtostderr
E0923 13:33:17.110375    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:35:30.056070    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:36:20.208241    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:38:17.130700    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-560300 --wait=true -v=8 --alsologtostderr: (6m56.7951073s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-560300
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-560300	172.19.153.215
multinode-560300-m02	172.19.147.68
multinode-560300-m03	172.19.154.147
After restart: multinode-560300	172.19.156.56
multinode-560300-m02	172.19.147.0
multinode-560300-m03	172.19.145.249
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-560300 -n multinode-560300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-560300 -n multinode-560300: (10.4277743s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 logs -n 25: (7.8382252s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                          Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:23 UTC | 23 Sep 24 13:24 UTC |
	|         | multinode-560300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | multinode-560300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | multinode-560300:/home/docker/cp-test_multinode-560300-m02_multinode-560300.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | multinode-560300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n multinode-560300 sudo cat                                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | /home/docker/cp-test_multinode-560300-m02_multinode-560300.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m03:/home/docker/cp-test_multinode-560300-m02_multinode-560300-m03.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n multinode-560300-m03 sudo cat                                                                   | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | /home/docker/cp-test_multinode-560300-m02_multinode-560300-m03.txt                                                      |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp testdata\cp-test.txt                                                                                | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m03:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:26 UTC |
	|         | multinode-560300:/home/docker/cp-test_multinode-560300-m03_multinode-560300.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | multinode-560300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n multinode-560300 sudo cat                                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | /home/docker/cp-test_multinode-560300-m03_multinode-560300.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | multinode-560300-m02:/home/docker/cp-test_multinode-560300-m03_multinode-560300-m02.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | multinode-560300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n multinode-560300-m02 sudo cat                                                                   | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | /home/docker/cp-test_multinode-560300-m03_multinode-560300-m02.txt                                                      |                  |                   |         |                     |                     |
	| node    | multinode-560300 node stop m03                                                                                          | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:27 UTC |
	| node    | multinode-560300 node start                                                                                             | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                              |                  |                   |         |                     |                     |
	| node    | list -p multinode-560300                                                                                                | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:30 UTC |                     |
	| stop    | -p multinode-560300                                                                                                     | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:30 UTC | 23 Sep 24 13:32 UTC |
	| start   | -p multinode-560300                                                                                                     | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:32 UTC | 23 Sep 24 13:39 UTC |
	|         | --wait=true -v=8                                                                                                        |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                       |                  |                   |         |                     |                     |
	| node    | list -p multinode-560300                                                                                                | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:39 UTC |                     |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:32:21
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:32:21.077470    7084 out.go:345] Setting OutFile to fd 1800 ...
	I0923 13:32:21.120826    7084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:32:21.120826    7084 out.go:358] Setting ErrFile to fd 2004...
	I0923 13:32:21.120826    7084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:32:21.138833    7084 out.go:352] Setting JSON to false
	I0923 13:32:21.141842    7084 start.go:129] hostinfo: {"hostname":"minikube5","uptime":494317,"bootTime":1726604024,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 13:32:21.141842    7084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 13:32:21.304695    7084 out.go:177] * [multinode-560300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 13:32:21.337762    7084 notify.go:220] Checking for updates...
	I0923 13:32:21.399695    7084 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:32:21.436422    7084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:32:21.497690    7084 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 13:32:21.515821    7084 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:32:21.542784    7084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:32:21.549606    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:32:21.550084    7084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:32:26.386396    7084 out.go:177] * Using the hyperv driver based on existing profile
	I0923 13:32:26.454832    7084 start.go:297] selected driver: hyperv
	I0923 13:32:26.455626    7084 start.go:901] validating driver "hyperv" against &{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:
false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:32:26.456121    7084 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:32:26.504002    7084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:32:26.504245    7084 cni.go:84] Creating CNI manager for ""
	I0923 13:32:26.504245    7084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:32:26.504245    7084 start.go:340] cluster config:
	{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:fals
e kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:32:26.504785    7084 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:32:26.595332    7084 out.go:177] * Starting "multinode-560300" primary control-plane node in "multinode-560300" cluster
	I0923 13:32:26.603420    7084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:32:26.604162    7084 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 13:32:26.604162    7084 cache.go:56] Caching tarball of preloaded images
	I0923 13:32:26.604162    7084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 13:32:26.604699    7084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 13:32:26.604939    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:32:26.607010    7084 start.go:360] acquireMachinesLock for multinode-560300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:32:26.607010    7084 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-560300"
	I0923 13:32:26.607010    7084 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:32:26.607668    7084 fix.go:54] fixHost starting: 
	I0923 13:32:26.607820    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:28.931008    7084 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 13:32:28.931008    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:28.931008    7084 fix.go:112] recreateIfNeeded on multinode-560300: state=Stopped err=<nil>
	W0923 13:32:28.931212    7084 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:32:28.962326    7084 out.go:177] * Restarting existing hyperv VM for "multinode-560300" ...
	I0923 13:32:29.037593    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-560300
	I0923 13:32:31.897705    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:31.897911    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:31.897911    7084 main.go:141] libmachine: Waiting for host to start...
	I0923 13:32:31.897911    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:33.829108    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:33.829291    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:33.829417    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:36.029491    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:36.029491    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:37.030143    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:38.936377    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:38.936830    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:38.936912    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:41.086494    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:41.087314    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:42.087715    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:43.988407    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:43.988407    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:43.988502    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:46.248921    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:46.248921    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:47.250009    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:49.179692    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:49.180270    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:49.180428    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:51.322828    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:51.322899    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:52.323949    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:54.259567    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:54.259567    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:54.259567    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:56.533613    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:32:56.533613    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:56.535789    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:58.397922    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:58.397922    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:58.398507    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:00.583635    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:00.583635    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:00.584427    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:33:00.586398    7084 machine.go:93] provisionDockerMachine start ...
	I0923 13:33:00.586581    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:02.463986    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:02.463986    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:02.464846    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:04.746586    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:04.746586    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:04.753172    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:04.753717    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:04.753818    7084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:33:04.879125    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 13:33:04.879125    7084 buildroot.go:166] provisioning hostname "multinode-560300"
	I0923 13:33:04.879125    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:06.761214    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:06.762254    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:06.762254    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:08.978693    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:08.979536    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:08.984918    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:08.985559    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:08.985559    7084 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-560300 && echo "multinode-560300" | sudo tee /etc/hostname
	I0923 13:33:09.142948    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-560300
	
	I0923 13:33:09.142948    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:11.001229    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:11.001229    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:11.001320    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:13.226165    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:13.227061    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:13.231080    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:13.231131    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:13.231131    7084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-560300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-560300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-560300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:33:13.373260    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:33:13.373260    7084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 13:33:13.373260    7084 buildroot.go:174] setting up certificates
	I0923 13:33:13.373260    7084 provision.go:84] configureAuth start
	I0923 13:33:13.373260    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:15.201988    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:15.201988    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:15.202342    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:17.402278    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:17.402278    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:17.402871    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:19.300327    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:19.300327    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:19.300327    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:21.573859    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:21.573859    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:21.574420    7084 provision.go:143] copyHostCerts
	I0923 13:33:21.574420    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 13:33:21.574420    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 13:33:21.574420    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 13:33:21.575010    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 13:33:21.576201    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 13:33:21.576262    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 13:33:21.576262    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 13:33:21.576262    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 13:33:21.576979    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 13:33:21.576979    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 13:33:21.577521    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 13:33:21.577701    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 13:33:21.578304    7084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-560300 san=[127.0.0.1 172.19.156.56 localhost minikube multinode-560300]
	I0923 13:33:21.692877    7084 provision.go:177] copyRemoteCerts
	I0923 13:33:21.702196    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:33:21.702196    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:23.560049    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:23.560049    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:23.560049    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:25.800955    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:25.800955    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:25.801923    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:33:25.914863    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2123829s)
	I0923 13:33:25.914863    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 13:33:25.916857    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:33:25.961787    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 13:33:25.962144    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0923 13:33:26.012256    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 13:33:26.012899    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:33:26.052950    7084 provision.go:87] duration metric: took 12.6788336s to configureAuth
	I0923 13:33:26.052950    7084 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:33:26.054594    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:33:26.054594    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:27.926522    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:27.926522    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:27.926827    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:30.174752    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:30.174752    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:30.178960    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:30.179481    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:30.179481    7084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 13:33:30.318782    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 13:33:30.318782    7084 buildroot.go:70] root file system type: tmpfs
	I0923 13:33:30.319162    7084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 13:33:30.319195    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:32.120519    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:32.120519    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:32.121014    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:34.386700    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:34.386700    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:34.390685    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:34.390751    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:34.390751    7084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 13:33:34.546922    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 13:33:34.547036    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:36.447946    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:36.447946    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:36.448039    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:38.660754    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:38.660754    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:38.664208    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:38.664887    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:38.664887    7084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 13:33:41.039207    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 13:33:41.039207    7084 machine.go:96] duration metric: took 40.4500041s to provisionDockerMachine
	I0923 13:33:41.039207    7084 start.go:293] postStartSetup for "multinode-560300" (driver="hyperv")
	I0923 13:33:41.039207    7084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:33:41.051200    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:33:41.051200    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:42.891677    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:42.891677    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:42.891677    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:45.148609    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:45.149694    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:45.150450    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:33:45.264997    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2135117s)
	I0923 13:33:45.275085    7084 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:33:45.284037    7084 command_runner.go:130] > NAME=Buildroot
	I0923 13:33:45.284037    7084 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:33:45.284037    7084 command_runner.go:130] > ID=buildroot
	I0923 13:33:45.284037    7084 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:33:45.284037    7084 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:33:45.284037    7084 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:33:45.284037    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 13:33:45.285024    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 13:33:45.285836    7084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 13:33:45.285881    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 13:33:45.294676    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:33:45.316241    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 13:33:45.358064    7084 start.go:296] duration metric: took 4.3185041s for postStartSetup
	I0923 13:33:45.358064    7084 fix.go:56] duration metric: took 1m18.7457376s for fixHost
	I0923 13:33:45.358209    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:47.220169    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:47.220169    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:47.220169    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:49.411952    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:49.411952    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:49.416513    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:49.417187    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:49.417187    7084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:33:49.542819    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727098429.758532825
	
	I0923 13:33:49.542863    7084 fix.go:216] guest clock: 1727098429.758532825
	I0923 13:33:49.542950    7084 fix.go:229] Guest: 2024-09-23 13:33:49.758532825 +0000 UTC Remote: 2024-09-23 13:33:45.3580642 +0000 UTC m=+84.351991701 (delta=4.400468625s)
	I0923 13:33:49.543061    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:51.404131    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:51.404131    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:51.404349    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:53.636941    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:53.636993    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:53.641109    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:53.641722    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:53.641722    7084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727098429
	I0923 13:33:53.786596    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 13:33:49 UTC 2024
	
	I0923 13:33:53.786596    7084 fix.go:236] clock set: Mon Sep 23 13:33:49 UTC 2024
	 (err=<nil>)
	I0923 13:33:53.786596    7084 start.go:83] releasing machines lock for "multinode-560300", held for 1m27.1737011s
	I0923 13:33:53.787741    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:55.645266    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:55.645266    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:55.645915    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:57.882612    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:57.883276    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:57.887510    7084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 13:33:57.887783    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:57.897674    7084 ssh_runner.go:195] Run: cat /version.json
	I0923 13:33:57.897674    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:59.833368    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:59.833368    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:59.834455    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:59.835406    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:59.835579    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:59.835658    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:34:02.218651    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:34:02.218651    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:02.219006    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:34:02.250859    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:34:02.250859    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:02.252002    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:34:02.312549    7084 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 13:34:02.312549    7084 ssh_runner.go:235] Completed: cat /version.json: (4.4145776s)
	I0923 13:34:02.321365    7084 ssh_runner.go:195] Run: systemctl --version
	I0923 13:34:02.326814    7084 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0923 13:34:02.326923    7084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.439064s)
	W0923 13:34:02.327010    7084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 13:34:02.336422    7084 command_runner.go:130] > systemd 252 (252)
	I0923 13:34:02.336422    7084 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 13:34:02.345505    7084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:34:02.356355    7084 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 13:34:02.356462    7084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:34:02.364924    7084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:34:02.392725    7084 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 13:34:02.392725    7084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 13:34:02.392725    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:34:02.392725    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:34:02.427122    7084 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 13:34:02.438070    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0923 13:34:02.453604    7084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 13:34:02.453604    7084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 13:34:02.468493    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 13:34:02.487256    7084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:34:02.498433    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 13:34:02.525577    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:34:02.551661    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:34:02.581018    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:34:02.607714    7084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:34:02.637144    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:34:02.662769    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:34:02.691865    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 13:34:02.719612    7084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:34:02.735756    7084 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:34:02.735831    7084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:34:02.743936    7084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:34:02.772496    7084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:34:02.799629    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:02.996275    7084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:34:03.027524    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:34:03.038085    7084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 13:34:03.055051    7084 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 13:34:03.055051    7084 command_runner.go:130] > [Unit]
	I0923 13:34:03.055051    7084 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 13:34:03.055051    7084 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 13:34:03.055051    7084 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 13:34:03.055051    7084 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 13:34:03.055051    7084 command_runner.go:130] > StartLimitBurst=3
	I0923 13:34:03.055051    7084 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 13:34:03.055051    7084 command_runner.go:130] > [Service]
	I0923 13:34:03.055051    7084 command_runner.go:130] > Type=notify
	I0923 13:34:03.055051    7084 command_runner.go:130] > Restart=on-failure
	I0923 13:34:03.055051    7084 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 13:34:03.055051    7084 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 13:34:03.055051    7084 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 13:34:03.055051    7084 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 13:34:03.055051    7084 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 13:34:03.055051    7084 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 13:34:03.055051    7084 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 13:34:03.055051    7084 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 13:34:03.055051    7084 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 13:34:03.055051    7084 command_runner.go:130] > ExecStart=
	I0923 13:34:03.055051    7084 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0923 13:34:03.055051    7084 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 13:34:03.055051    7084 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 13:34:03.055051    7084 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 13:34:03.055051    7084 command_runner.go:130] > LimitNOFILE=infinity
	I0923 13:34:03.055051    7084 command_runner.go:130] > LimitNPROC=infinity
	I0923 13:34:03.055051    7084 command_runner.go:130] > LimitCORE=infinity
	I0923 13:34:03.055051    7084 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 13:34:03.055051    7084 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 13:34:03.055051    7084 command_runner.go:130] > TasksMax=infinity
	I0923 13:34:03.055051    7084 command_runner.go:130] > TimeoutStartSec=0
	I0923 13:34:03.055051    7084 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 13:34:03.055051    7084 command_runner.go:130] > Delegate=yes
	I0923 13:34:03.055051    7084 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 13:34:03.055051    7084 command_runner.go:130] > KillMode=process
	I0923 13:34:03.055051    7084 command_runner.go:130] > [Install]
	I0923 13:34:03.055051    7084 command_runner.go:130] > WantedBy=multi-user.target
	I0923 13:34:03.063456    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:34:03.094077    7084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:34:03.127259    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:34:03.158977    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:34:03.195325    7084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 13:34:03.257774    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:34:03.279520    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:34:03.314698    7084 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 13:34:03.327269    7084 ssh_runner.go:195] Run: which cri-dockerd
	I0923 13:34:03.332386    7084 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 13:34:03.342589    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 13:34:03.358220    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 13:34:03.394219    7084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 13:34:03.563399    7084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 13:34:03.729091    7084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 13:34:03.729434    7084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 13:34:03.767051    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:03.929222    7084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:34:06.586938    7084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6574266s)
	I0923 13:34:06.597697    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 13:34:06.630170    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:34:06.664893    7084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 13:34:06.871108    7084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 13:34:07.053776    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:07.237240    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 13:34:07.287988    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:34:07.320747    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:07.518626    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 13:34:07.614002    7084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 13:34:07.624818    7084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 13:34:07.632612    7084 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 13:34:07.632675    7084 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:34:07.632675    7084 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0923 13:34:07.632675    7084 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 13:34:07.632675    7084 command_runner.go:130] > Access: 2024-09-23 13:34:07.759485353 +0000
	I0923 13:34:07.632790    7084 command_runner.go:130] > Modify: 2024-09-23 13:34:07.759485353 +0000
	I0923 13:34:07.632790    7084 command_runner.go:130] > Change: 2024-09-23 13:34:07.762485770 +0000
	I0923 13:34:07.632847    7084 command_runner.go:130] >  Birth: -
	I0923 13:34:07.632847    7084 start.go:563] Will wait 60s for crictl version
	I0923 13:34:07.643244    7084 ssh_runner.go:195] Run: which crictl
	I0923 13:34:07.649449    7084 command_runner.go:130] > /usr/bin/crictl
	I0923 13:34:07.656733    7084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:34:07.702506    7084 command_runner.go:130] > Version:  0.1.0
	I0923 13:34:07.702506    7084 command_runner.go:130] > RuntimeName:  docker
	I0923 13:34:07.702506    7084 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 13:34:07.702506    7084 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:34:07.704163    7084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 13:34:07.713326    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:34:07.743233    7084 command_runner.go:130] > 27.3.0
	I0923 13:34:07.752143    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:34:07.781605    7084 command_runner.go:130] > 27.3.0
	I0923 13:34:07.785549    7084 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 13:34:07.785711    7084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 13:34:07.790401    7084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 13:34:07.791351    7084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 13:34:07.791351    7084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 13:34:07.791351    7084 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 13:34:07.793230    7084 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 13:34:07.793230    7084 ip.go:214] interface addr: 172.19.144.1/20
	I0923 13:34:07.802035    7084 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 13:34:07.807457    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:34:07.827413    7084 kubeadm.go:883] updating cluster {Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspek
tor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:34:07.827693    7084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:34:07.835009    7084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 13:34:07.858817    7084 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 13:34:07.859026    7084 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 13:34:07.859084    7084 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:34:07.859084    7084 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0923 13:34:07.859137    7084 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0923 13:34:07.859137    7084 docker.go:615] Images already preloaded, skipping extraction
	I0923 13:34:07.866287    7084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 13:34:07.888562    7084 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 13:34:07.888562    7084 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:34:07.888562    7084 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0923 13:34:07.889701    7084 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0923 13:34:07.889780    7084 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:34:07.889780    7084 kubeadm.go:934] updating node { 172.19.156.56 8443 v1.31.1 docker true true} ...
	I0923 13:34:07.890121    7084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-560300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.156.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:34:07.896568    7084 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 13:34:07.954477    7084 command_runner.go:130] > cgroupfs
	I0923 13:34:07.954740    7084 cni.go:84] Creating CNI manager for ""
	I0923 13:34:07.954826    7084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:34:07.954826    7084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:34:07.954826    7084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.156.56 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-560300 NodeName:multinode-560300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.156.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.156.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:34:07.954826    7084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.156.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-560300"
	  kubeletExtraArgs:
	    node-ip: 172.19.156.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.156.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:34:07.966573    7084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:34:07.986005    7084 command_runner.go:130] > kubeadm
	I0923 13:34:07.986061    7084 command_runner.go:130] > kubectl
	I0923 13:34:07.986061    7084 command_runner.go:130] > kubelet
	I0923 13:34:07.986098    7084 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:34:07.997622    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:34:08.014362    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0923 13:34:08.045056    7084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:34:08.078517    7084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0923 13:34:08.125522    7084 ssh_runner.go:195] Run: grep 172.19.156.56	control-plane.minikube.internal$ /etc/hosts
	I0923 13:34:08.132791    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.156.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
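The hosts-file rewrite logged above can be sketched standalone: drop any stale `control-plane.minikube.internal` line, append the current IP, and write the result back. This is a minimal sketch against a throwaway temp file, not the real `/etc/hosts`; the old IP `172.19.153.215` is taken from the drift diff later in this log.

```shell
# Sketch of minikube's /etc/hosts update (temp file stands in for /etc/hosts).
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.153.215\tcontrol-plane.minikube.internal\n' > "$hosts"
# Remove any existing control-plane entry (tab-anchored, end-of-line match),
# then append the fresh one, exactly as the logged bash one-liner does.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '172.19.156.56\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```

The grep-then-append pattern keeps the update idempotent: rerunning it leaves exactly one control-plane entry regardless of how many times the node IP changes.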
	I0923 13:34:08.162805    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:08.348528    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:34:08.374828    7084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300 for IP: 172.19.156.56
	I0923 13:34:08.374909    7084 certs.go:194] generating shared ca certs ...
	I0923 13:34:08.374979    7084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:08.375957    7084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 13:34:08.376469    7084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 13:34:08.376793    7084 certs.go:256] generating profile certs ...
	I0923 13:34:08.377535    7084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\client.key
	I0923 13:34:08.377685    7084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.970a6c31
	I0923 13:34:08.377827    7084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.970a6c31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.156.56]
	I0923 13:34:08.789088    7084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.970a6c31 ...
	I0923 13:34:08.789088    7084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.970a6c31: {Name:mk8a3149834e23c491bffc14de1904277923a2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:08.791190    7084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.970a6c31 ...
	I0923 13:34:08.791190    7084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.970a6c31: {Name:mk5029a77e212f26c295dbd92ef64b74432c8110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:08.792674    7084 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.970a6c31 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt
	I0923 13:34:08.804212    7084 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.970a6c31 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key
	I0923 13:34:08.805206    7084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key
	I0923 13:34:08.805206    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:34:08.805545    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:34:08.805688    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:34:08.805688    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:34:08.805688    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 13:34:08.806377    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 13:34:08.806694    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 13:34:08.806833    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 13:34:08.807020    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 13:34:08.807399    7084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 13:34:08.807399    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 13:34:08.807738    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 13:34:08.807887    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 13:34:08.807887    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 13:34:08.808415    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 13:34:08.808558    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:08.808704    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 13:34:08.808704    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 13:34:08.809339    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:34:08.859222    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 13:34:08.902815    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:34:08.946297    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:34:08.989564    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 13:34:09.032968    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:34:09.077214    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:34:09.121073    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:34:09.166875    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:34:09.212415    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 13:34:09.252552    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 13:34:09.286282    7084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:34:09.322523    7084 ssh_runner.go:195] Run: openssl version
	I0923 13:34:09.330950    7084 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:34:09.343533    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:34:09.370858    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:09.376858    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:09.376858    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:09.385636    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:09.394363    7084 command_runner.go:130] > b5213941
	I0923 13:34:09.402497    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:34:09.429740    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 13:34:09.455218    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 13:34:09.463663    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:34:09.463663    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:34:09.472320    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 13:34:09.479165    7084 command_runner.go:130] > 51391683
	I0923 13:34:09.488860    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 13:34:09.514789    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 13:34:09.539787    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 13:34:09.546758    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:34:09.546758    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:34:09.554466    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 13:34:09.561466    7084 command_runner.go:130] > 3ec20f2e
	I0923 13:34:09.569467    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
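The hash-and-symlink sequence above is OpenSSL's CA lookup convention: certificates in `/etc/ssl/certs` are found by a `<subject-hash>.0` filename, so minikube hashes each PEM with `openssl x509 -hash` and symlinks it under that name. A sketch with a throwaway self-signed cert (the hash value depends on the subject, so it will differ from the `b5213941` seen above):

```shell
# Generate a disposable CA cert, hash its subject, and create the .0 symlink
# in a temp dir standing in for /etc/ssl/certs.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA-demo" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")   # 8 hex chars
ln -fs "$dir/ca.pem" "$dir/$hash.0"
```

With the symlink in place, any OpenSSL-based verification pointed at that directory can resolve the CA by subject hash.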
	I0923 13:34:09.597930    7084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:34:09.604524    7084 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:34:09.604524    7084 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 13:34:09.604524    7084 command_runner.go:130] > Device: 8,1	Inode: 4194087     Links: 1
	I0923 13:34:09.604524    7084 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 13:34:09.604524    7084 command_runner.go:130] > Access: 2024-09-23 13:12:43.705183234 +0000
	I0923 13:34:09.604524    7084 command_runner.go:130] > Modify: 2024-09-23 13:12:43.705183234 +0000
	I0923 13:34:09.604524    7084 command_runner.go:130] > Change: 2024-09-23 13:12:43.705183234 +0000
	I0923 13:34:09.604524    7084 command_runner.go:130] >  Birth: 2024-09-23 13:12:43.705183234 +0000
	I0923 13:34:09.612933    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:34:09.621357    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.629009    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:34:09.637792    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.645891    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:34:09.655333    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.663552    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:34:09.672463    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.680531    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:34:09.689451    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.697535    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 13:34:09.706286    7084 command_runner.go:130] > Certificate will not expire
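The repeated "Certificate will not expire" checks above use `openssl x509 -checkend N`, which exits 0 if the certificate is still valid N seconds from now (86400 = 24 hours). A self-contained sketch with a throwaway 2-day cert in place of the real files under `/var/lib/minikube/certs`:

```shell
# Probe whether a cert survives the next 24 hours, as minikube does before
# deciding to reuse existing control-plane certificates.
certdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=checkend-demo" \
  -keyout "$certdir/demo.key" -out "$certdir/demo.crt" 2>/dev/null
if openssl x509 -noout -in "$certdir/demo.crt" -checkend 86400; then
  msg="Certificate will not expire"
else
  msg="Certificate will expire"
fi
```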
	I0923 13:34:09.706483    7084 kubeadm.go:392] StartCluster: {Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:34:09.717138    7084 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 13:34:09.749508    7084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:34:09.765836    7084 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0923 13:34:09.765836    7084 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0923 13:34:09.765836    7084 command_runner.go:130] > /var/lib/minikube/etcd:
	I0923 13:34:09.765836    7084 command_runner.go:130] > member
	I0923 13:34:09.765836    7084 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 13:34:09.765836    7084 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 13:34:09.773606    7084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 13:34:09.790325    7084 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 13:34:09.791618    7084 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-560300" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:34:09.792623    7084 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-560300" cluster setting kubeconfig missing "multinode-560300" context setting]
	I0923 13:34:09.794624    7084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:09.810681    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:34:09.811285    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:34:09.812469    7084 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 13:34:09.820220    7084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 13:34:09.835787    7084 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0923 13:34:09.835787    7084 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0923 13:34:09.835787    7084 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0923 13:34:09.835787    7084 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0923 13:34:09.836739    7084 command_runner.go:130] >  kind: InitConfiguration
	I0923 13:34:09.836739    7084 command_runner.go:130] >  localAPIEndpoint:
	I0923 13:34:09.836739    7084 command_runner.go:130] > -  advertiseAddress: 172.19.153.215
	I0923 13:34:09.836739    7084 command_runner.go:130] > +  advertiseAddress: 172.19.156.56
	I0923 13:34:09.836788    7084 command_runner.go:130] >    bindPort: 8443
	I0923 13:34:09.836788    7084 command_runner.go:130] >  bootstrapTokens:
	I0923 13:34:09.836788    7084 command_runner.go:130] >    - groups:
	I0923 13:34:09.836788    7084 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0923 13:34:09.836788    7084 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0923 13:34:09.836788    7084 command_runner.go:130] >    name: "multinode-560300"
	I0923 13:34:09.836788    7084 command_runner.go:130] >    kubeletExtraArgs:
	I0923 13:34:09.836788    7084 command_runner.go:130] > -    node-ip: 172.19.153.215
	I0923 13:34:09.836788    7084 command_runner.go:130] > +    node-ip: 172.19.156.56
	I0923 13:34:09.836788    7084 command_runner.go:130] >    taints: []
	I0923 13:34:09.836893    7084 command_runner.go:130] >  ---
	I0923 13:34:09.836893    7084 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0923 13:34:09.836893    7084 command_runner.go:130] >  kind: ClusterConfiguration
	I0923 13:34:09.836893    7084 command_runner.go:130] >  apiServer:
	I0923 13:34:09.836964    7084 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.153.215"]
	I0923 13:34:09.836964    7084 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.156.56"]
	I0923 13:34:09.836964    7084 command_runner.go:130] >    extraArgs:
	I0923 13:34:09.836964    7084 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0923 13:34:09.837049    7084 command_runner.go:130] >  controllerManager:
	I0923 13:34:09.837076    7084 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.153.215
	+  advertiseAddress: 172.19.156.56
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-560300"
	   kubeletExtraArgs:
	-    node-ip: 172.19.153.215
	+    node-ip: 172.19.156.56
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.153.215"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.156.56"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
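The drift detection shown in the stdout block above hinges on `diff`'s exit status: `sudo diff -u kubeadm.yaml kubeadm.yaml.new` exits non-zero when the deployed config differs from the freshly rendered one, which is what sends minikube down the reconfigure path. A sketch with stand-in files and the two advertise addresses from the diff:

```shell
# diff exits 0 on identical files, 1 on differences; minikube treats the
# latter as kubeadm config drift.
old=$(mktemp); new=$(mktemp)
printf 'advertiseAddress: 172.19.153.215\n' > "$old"
printf 'advertiseAddress: 172.19.156.56\n'  > "$new"
if diff -u "$old" "$new" > /dev/null; then
  drift=no
else
  drift=yes   # reconfigure cluster from the new file
fi
```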
	I0923 13:34:09.837076    7084 kubeadm.go:1160] stopping kube-system containers ...
	I0923 13:34:09.842735    7084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 13:34:09.870308    7084 command_runner.go:130] > 648460d0f31f
	I0923 13:34:09.870350    7084 command_runner.go:130] > b07ca5858154
	I0923 13:34:09.870350    7084 command_runner.go:130] > eb12eb8fe1ea
	I0923 13:34:09.870350    7084 command_runner.go:130] > 544604cdd801
	I0923 13:34:09.870350    7084 command_runner.go:130] > a83589d1098a
	I0923 13:34:09.870350    7084 command_runner.go:130] > c92a84f5caf2
	I0923 13:34:09.870350    7084 command_runner.go:130] > cf2fc1e61774
	I0923 13:34:09.870350    7084 command_runner.go:130] > 0f322d00a55b
	I0923 13:34:09.870350    7084 command_runner.go:130] > 90116ded443d
	I0923 13:34:09.870350    7084 command_runner.go:130] > 117d706d07d2
	I0923 13:34:09.870350    7084 command_runner.go:130] > 03ce0954301e
	I0923 13:34:09.870511    7084 command_runner.go:130] > 8ab41eeaea91
	I0923 13:34:09.870579    7084 command_runner.go:130] > 7c23acc78f4c
	I0923 13:34:09.870579    7084 command_runner.go:130] > 67b7e79ad6b5
	I0923 13:34:09.870579    7084 command_runner.go:130] > b160f7a7a5d2
	I0923 13:34:09.870579    7084 command_runner.go:130] > 6ef47416b046
	I0923 13:34:09.870672    7084 docker.go:483] Stopping containers: [648460d0f31f b07ca5858154 eb12eb8fe1ea 544604cdd801 a83589d1098a c92a84f5caf2 cf2fc1e61774 0f322d00a55b 90116ded443d 117d706d07d2 03ce0954301e 8ab41eeaea91 7c23acc78f4c 67b7e79ad6b5 b160f7a7a5d2 6ef47416b046]
	I0923 13:34:09.879657    7084 ssh_runner.go:195] Run: docker stop 648460d0f31f b07ca5858154 eb12eb8fe1ea 544604cdd801 a83589d1098a c92a84f5caf2 cf2fc1e61774 0f322d00a55b 90116ded443d 117d706d07d2 03ce0954301e 8ab41eeaea91 7c23acc78f4c 67b7e79ad6b5 b160f7a7a5d2 6ef47416b046
	I0923 13:34:09.905040    7084 command_runner.go:130] > 648460d0f31f
	I0923 13:34:09.905295    7084 command_runner.go:130] > b07ca5858154
	I0923 13:34:09.905295    7084 command_runner.go:130] > eb12eb8fe1ea
	I0923 13:34:09.905295    7084 command_runner.go:130] > 544604cdd801
	I0923 13:34:09.905295    7084 command_runner.go:130] > a83589d1098a
	I0923 13:34:09.905295    7084 command_runner.go:130] > c92a84f5caf2
	I0923 13:34:09.905295    7084 command_runner.go:130] > cf2fc1e61774
	I0923 13:34:09.905295    7084 command_runner.go:130] > 0f322d00a55b
	I0923 13:34:09.905295    7084 command_runner.go:130] > 90116ded443d
	I0923 13:34:09.905295    7084 command_runner.go:130] > 117d706d07d2
	I0923 13:34:09.905295    7084 command_runner.go:130] > 03ce0954301e
	I0923 13:34:09.905295    7084 command_runner.go:130] > 8ab41eeaea91
	I0923 13:34:09.905295    7084 command_runner.go:130] > 7c23acc78f4c
	I0923 13:34:09.905295    7084 command_runner.go:130] > 67b7e79ad6b5
	I0923 13:34:09.905295    7084 command_runner.go:130] > b160f7a7a5d2
	I0923 13:34:09.905295    7084 command_runner.go:130] > 6ef47416b046
	I0923 13:34:09.914434    7084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 13:34:09.955877    7084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:34:09.972508    7084 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0923 13:34:09.972625    7084 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0923 13:34:09.972755    7084 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0923 13:34:09.972823    7084 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:34:09.972895    7084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:34:09.972976    7084 kubeadm.go:157] found existing configuration files:
	
	I0923 13:34:09.983623    7084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:34:09.999345    7084 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:34:09.999345    7084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:34:10.007696    7084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:34:10.035610    7084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:34:10.053684    7084 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:34:10.053754    7084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:34:10.062195    7084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:34:10.086608    7084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:34:10.102235    7084 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:34:10.102235    7084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:34:10.110095    7084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:34:10.135038    7084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:34:10.150701    7084 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:34:10.150701    7084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:34:10.159668    7084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 13:34:10.183290    7084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:34:10.199857    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:10.390619    7084 command_runner.go:130] ! W0923 13:34:10.608971    1591 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:10.391440    7084 command_runner.go:130] ! W0923 13:34:10.610043    1591 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using the existing "sa" key
	I0923 13:34:10.402424    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:10.458239    7084 command_runner.go:130] ! W0923 13:34:10.677109    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:10.459404    7084 command_runner.go:130] ! W0923 13:34:10.677844    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:34:12.312082    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.9095285s)
	I0923 13:34:12.312082    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:12.368439    7084 command_runner.go:130] ! W0923 13:34:12.587045    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.369369    7084 command_runner.go:130] ! W0923 13:34:12.588099    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.571209    7084 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:34:12.571302    7084 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:34:12.571379    7084 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 13:34:12.571450    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:12.634265    7084 command_runner.go:130] ! W0923 13:34:12.852327    1629 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.635062    7084 command_runner.go:130] ! W0923 13:34:12.853020    1629 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.655266    7084 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:34:12.655266    7084 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:34:12.655266    7084 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:34:12.655266    7084 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:34:12.655266    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:12.741853    7084 command_runner.go:130] ! W0923 13:34:12.960044    1636 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.742145    7084 command_runner.go:130] ! W0923 13:34:12.960940    1636 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.766906    7084 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:34:12.767012    7084 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:34:12.776277    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:13.279799    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:13.779079    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:14.278375    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:14.778571    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:14.802648    7084 command_runner.go:130] > 1960
	I0923 13:34:14.802727    7084 api_server.go:72] duration metric: took 2.0355779s to wait for apiserver process to appear ...
	I0923 13:34:14.802727    7084 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:34:14.802828    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:17.720157    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 13:34:17.720300    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 13:34:17.720300    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:17.850795    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:34:17.850795    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:34:17.850795    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:17.859222    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:34:17.859290    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:34:18.303501    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:18.312344    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:34:18.312389    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:34:18.803361    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:18.825418    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:34:18.825418    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:34:19.303692    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:19.315167    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 200:
	ok
	I0923 13:34:19.316401    7084 round_trippers.go:463] GET https://172.19.156.56:8443/version
	I0923 13:34:19.316401    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:19.316401    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:19.316401    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:19.328710    7084 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0923 13:34:19.328710    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:19.328710    7084 round_trippers.go:580]     Audit-Id: f4401bf3-2600-430b-8f13-521935b5c441
	I0923 13:34:19.328710    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:19.328778    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:19.328778    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:19.328778    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:19.328778    7084 round_trippers.go:580]     Content-Length: 263
	I0923 13:34:19.328778    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:19 GMT
	I0923 13:34:19.328838    7084 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 13:34:19.328997    7084 api_server.go:141] control plane version: v1.31.1
	I0923 13:34:19.329096    7084 api_server.go:131] duration metric: took 4.5260638s to wait for apiserver health ...
	I0923 13:34:19.329096    7084 cni.go:84] Creating CNI manager for ""
	I0923 13:34:19.329096    7084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:34:19.333287    7084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 13:34:19.345878    7084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 13:34:19.356777    7084 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0923 13:34:19.356899    7084 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0923 13:34:19.356899    7084 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0923 13:34:19.356899    7084 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 13:34:19.356899    7084 command_runner.go:130] > Access: 2024-09-23 13:32:56.102387400 +0000
	I0923 13:34:19.356899    7084 command_runner.go:130] > Modify: 2024-09-20 04:01:25.000000000 +0000
	I0923 13:34:19.356899    7084 command_runner.go:130] > Change: 2024-09-23 13:32:44.533000000 +0000
	I0923 13:34:19.356981    7084 command_runner.go:130] >  Birth: -
	I0923 13:34:19.357113    7084 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 13:34:19.357159    7084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 13:34:19.407720    7084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 13:34:20.589955    7084 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0923 13:34:20.590048    7084 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0923 13:34:20.590048    7084 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0923 13:34:20.590048    7084 command_runner.go:130] > daemonset.apps/kindnet configured
	I0923 13:34:20.590048    7084 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1822476s)
	I0923 13:34:20.590115    7084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:34:20.590265    7084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 13:34:20.590265    7084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 13:34:20.590406    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:20.590406    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:20.590406    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:20.590406    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:20.595684    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:20.595684    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:20.595684    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:20.595684    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:20 GMT
	I0923 13:34:20.595684    7084 round_trippers.go:580]     Audit-Id: 7c5555f4-a150-442e-9746-93fbae5f2377
	I0923 13:34:20.595684    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:20.595684    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:20.595684    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:20.596674    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1779"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91382 chars]
	I0923 13:34:20.602680    7084 system_pods.go:59] 12 kube-system pods found
	I0923 13:34:20.602680    7084 system_pods.go:61] "coredns-7c65d6cfc9-glx94" [f476c8f8-667a-48d4-84f8-4aa15336cea9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 13:34:20.602680    7084 system_pods.go:61] "etcd-multinode-560300" [477ee4f5-e333-4042-97cd-8457f60fd696] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kindnet-mdnmc" [ffaf3266-f3b8-424f-888b-15aff927de53] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kindnet-qg99z" [0f714fff-dd9b-4ba3-b2e9-6e9e18f21ae9] Running
	I0923 13:34:20.602680    7084 system_pods.go:61] "kindnet-z9mrc" [c9dfa12e-54ef-4d0b-825e-bcbcaa77b5a9] Running
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-apiserver-multinode-560300" [c88cb5c4-fe30-4354-bf55-1f281cf11190] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-controller-manager-multinode-560300" [aa0d358b-19fd-4553-8a34-f772ba945019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-proxy-dbkdp" [44a5a18e-0e93-4293-8d4b-13e3ec9acfef] Running
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-proxy-g5t97" [49d7601a-bda4-421e-bde7-acc35c157962] Running
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-proxy-rgmcw" [97050e09-6fc3-4e7b-b00e-07eb9332bf15] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-scheduler-multinode-560300" [01e5d6a3-2eb6-4fa4-8607-072724fb2880] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0923 13:34:20.602680    7084 system_pods.go:61] "storage-provisioner" [444d1029-f19d-4fa6-b454-c9c710e6d9b2] Running
	I0923 13:34:20.602680    7084 system_pods.go:74] duration metric: took 12.5642ms to wait for pod list to return data ...
	I0923 13:34:20.602680    7084 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:34:20.602680    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes
	I0923 13:34:20.602680    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:20.602680    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:20.602680    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:20.610725    7084 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 13:34:20.610725    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:20.610725    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:20.610725    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:20 GMT
	I0923 13:34:20.610725    7084 round_trippers.go:580]     Audit-Id: fcfb7d35-9971-4e1f-9c0e-03a15651ea9b
	I0923 13:34:20.610725    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:20.610725    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:20.610725    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:20.610725    7084 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1779"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16289 chars]
	I0923 13:34:20.611693    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:20.611693    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:20.611693    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:20.611693    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:20.611693    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:20.611693    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:20.611693    7084 node_conditions.go:105] duration metric: took 9.013ms to run NodePressure ...
	I0923 13:34:20.611693    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:20.670713    7084 command_runner.go:130] ! W0923 13:34:20.889559    2282 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:20.678678    7084 command_runner.go:130] ! W0923 13:34:20.898671    2282 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:20.997787    7084 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0923 13:34:20.997864    7084 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0923 13:34:20.997987    7084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0923 13:34:20.998045    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0923 13:34:20.998045    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:20.998045    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:20.998045    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.012671    7084 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 13:34:21.013069    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.013141    7084 round_trippers.go:580]     Audit-Id: 893affd4-36f4-46ab-8603-701e7a588ba9
	I0923 13:34:21.013141    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.013141    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.013141    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.013206    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.013269    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.018681    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1782"},"items":[{"metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1775","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 31322 chars]
	I0923 13:34:21.021273    7084 kubeadm.go:739] kubelet initialised
	I0923 13:34:21.021333    7084 kubeadm.go:740] duration metric: took 23.3019ms waiting for restarted kubelet to initialise ...
	I0923 13:34:21.021333    7084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:34:21.021547    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:21.021614    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.021642    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.021642    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.039229    7084 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0923 13:34:21.039698    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.039698    7084 round_trippers.go:580]     Audit-Id: 58f6b66c-77ba-474f-aacc-6d84054438d3
	I0923 13:34:21.039698    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.039698    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.039698    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.039698    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.039783    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.041330    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1782"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91382 chars]
	I0923 13:34:21.044663    7084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.045257    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:21.045257    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.045307    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.045307    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.049912    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:21.049912    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.049912    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.049912    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.049912    7084 round_trippers.go:580]     Audit-Id: ad3bc9e9-21b6-4469-aaf9-2a8956d5985e
	I0923 13:34:21.049912    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.049912    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.049912    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.049912    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:21.050914    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:21.050914    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.050914    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.050914    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.055920    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:21.055920    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.056830    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.056830    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.056830    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.056830    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.056830    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.056830    7084 round_trippers.go:580]     Audit-Id: e350e5a2-6bde-4eb9-9cff-d6ded1f94674
	I0923 13:34:21.057146    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0923 13:34:21.057608    7084 pod_ready.go:98] node "multinode-560300" hosting pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.057667    7084 pod_ready.go:82] duration metric: took 13.0035ms for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.057667    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.057667    7084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.057790    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:34:21.057790    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.057790    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.057790    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.060470    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:21.061255    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.061255    7084 round_trippers.go:580]     Audit-Id: 877d5fc1-3787-4e99-b107-8133e04979ea
	I0923 13:34:21.061255    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.061255    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.061255    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.061255    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.061255    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.061364    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1775","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6830 chars]
	I0923 13:34:21.062017    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:21.062081    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.062081    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.062081    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.064461    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:21.064461    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.064461    7084 round_trippers.go:580]     Audit-Id: d0e5fd5b-ab0a-4729-b4e7-e69a10b6923a
	I0923 13:34:21.064461    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.064461    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.064461    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.064461    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.064461    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.065307    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0923 13:34:21.065748    7084 pod_ready.go:98] node "multinode-560300" hosting pod "etcd-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.065748    7084 pod_ready.go:82] duration metric: took 8.0801ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.065748    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "etcd-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.065748    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.065926    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:34:21.065926    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.066104    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.066104    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.068445    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:21.068445    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.068445    7084 round_trippers.go:580]     Audit-Id: 4e5f9635-0f25-43c8-966b-7b5a2969e11e
	I0923 13:34:21.068445    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.068445    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.068445    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.068445    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.068445    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.069225    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"c88cb5c4-fe30-4354-bf55-1f281cf11190","resourceVersion":"1776","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.156.56:8443","kubernetes.io/config.hash":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.mirror":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.seen":"2024-09-23T13:34:12.942044692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8283 chars]
	I0923 13:34:21.069716    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:21.069779    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.069779    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.069779    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.076082    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:34:21.076172    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.076172    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.076172    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.076172    7084 round_trippers.go:580]     Audit-Id: 1255e212-e963-4415-b94d-4512ffb7dc44
	I0923 13:34:21.076172    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.076172    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.076172    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.076383    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0923 13:34:21.076778    7084 pod_ready.go:98] node "multinode-560300" hosting pod "kube-apiserver-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.076848    7084 pod_ready.go:82] duration metric: took 11.0278ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.076848    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "kube-apiserver-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.076848    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.076977    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:34:21.076977    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.076977    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.076977    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.085480    7084 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 13:34:21.085480    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.085480    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.085480    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.086482    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.086482    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.086506    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.086506    7084 round_trippers.go:580]     Audit-Id: 68f027ce-5b89-4a3e-a19c-f1bb9577d529
	I0923 13:34:21.086770    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"1748","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0923 13:34:21.087335    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:21.087398    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.087398    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.087398    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.089477    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:21.089477    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.089477    7084 round_trippers.go:580]     Audit-Id: 10226438-f03f-49df-ba75-cc0f2be1bbfa
	I0923 13:34:21.089477    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.089477    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.089477    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.089477    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.089477    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.090461    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0923 13:34:21.090461    7084 pod_ready.go:98] node "multinode-560300" hosting pod "kube-controller-manager-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.090461    7084 pod_ready.go:82] duration metric: took 13.6119ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.090461    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "kube-controller-manager-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.090461    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.198560    7084 request.go:632] Waited for 108.0917ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:34:21.198560    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:34:21.198560    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.198560    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.198560    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.202471    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:21.202471    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.202471    7084 round_trippers.go:580]     Audit-Id: 2144e5db-0a95-46ba-8dfb-ef817d4b8680
	I0923 13:34:21.202471    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.202471    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.202471    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.202471    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.202471    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.202471    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dbkdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"44a5a18e-0e93-4293-8d4b-13e3ec9acfef","resourceVersion":"1660","creationTimestamp":"2024-09-23T13:20:08Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:20:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0923 13:34:21.398638    7084 request.go:632] Waited for 195.5607ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:34:21.398638    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:34:21.398638    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.398638    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.398638    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.401652    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:21.402035    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.402064    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.402064    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.402064    7084 round_trippers.go:580]     Audit-Id: 3305564d-0a54-49d5-b3ed-f3a6c11f843e
	I0923 13:34:21.402064    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.402064    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.402064    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.402321    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"781efd95-4e81-4850-a300-9cef56c5e6d4","resourceVersion":"1786","creationTimestamp":"2024-09-23T13:30:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_30_01_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:30:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4304 chars]
	I0923 13:34:21.403118    7084 pod_ready.go:98] node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:34:21.403197    7084 pod_ready.go:82] duration metric: took 312.6355ms for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.403197    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:34:21.403197    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.598657    7084 request.go:632] Waited for 195.294ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:34:21.598657    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:34:21.598657    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.598657    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.598657    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.612984    7084 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 13:34:21.612984    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.612984    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.612984    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.612984    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.612984    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.612984    7084 round_trippers.go:580]     Audit-Id: 863a0912-361d-47e7-92e9-836ccee225ab
	I0923 13:34:21.612984    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.612984    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5t97","generateName":"kube-proxy-","namespace":"kube-system","uid":"49d7601a-bda4-421e-bde7-acc35c157962","resourceVersion":"1686","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I0923 13:34:21.799545    7084 request.go:632] Waited for 185.544ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:34:21.799545    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:34:21.799545    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.799545    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.799545    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.803275    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:21.803371    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.803371    7084 round_trippers.go:580]     Audit-Id: 9301a31b-7d6f-4384-b41a-c9f99186cd04
	I0923 13:34:21.803371    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.803371    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.803371    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.803371    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.803371    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:21.803958    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"1683","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4486 chars]
	I0923 13:34:21.804814    7084 pod_ready.go:98] node "multinode-560300-m02" hosting pod "kube-proxy-g5t97" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m02" has status "Ready":"Unknown"
	I0923 13:34:21.804893    7084 pod_ready.go:82] duration metric: took 401.5992ms for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.804893    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m02" hosting pod "kube-proxy-g5t97" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m02" has status "Ready":"Unknown"
	I0923 13:34:21.804893    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.998640    7084 request.go:632] Waited for 193.507ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:34:21.999165    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:34:21.999165    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.999165    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.999165    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.003139    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:22.003139    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.003139    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.003139    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.003139    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.003139    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:22.003139    7084 round_trippers.go:580]     Audit-Id: 18b23371-9762-40c5-9781-12dcc6fc34db
	I0923 13:34:22.003139    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.003290    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"1800","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0923 13:34:22.198360    7084 request.go:632] Waited for 194.3584ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.198360    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.198360    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:22.198360    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:22.198360    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.203104    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:22.203104    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.203104    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.203104    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.203104    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.203104    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.203104    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:22.203104    7084 round_trippers.go:580]     Audit-Id: b97249ec-a57e-43a1-9c1f-671676d3c95e
	I0923 13:34:22.203371    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:22.203912    7084 pod_ready.go:98] node "multinode-560300" hosting pod "kube-proxy-rgmcw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:22.203912    7084 pod_ready.go:82] duration metric: took 398.8778ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:22.203912    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "kube-proxy-rgmcw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:22.203975    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:22.398838    7084 request.go:632] Waited for 194.8496ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:34:22.398838    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:34:22.398838    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:22.398838    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:22.398838    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.402614    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:22.402684    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.402684    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.402746    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:22.402746    7084 round_trippers.go:580]     Audit-Id: 08e5c977-b589-466b-9f08-49f76c5594c2
	I0923 13:34:22.402746    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.402804    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.402804    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.403119    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"1747","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0923 13:34:22.598233    7084 request.go:632] Waited for 194.3677ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.598822    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.598822    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:22.598822    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:22.598822    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.602202    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:22.602202    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.602202    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:22.602202    7084 round_trippers.go:580]     Audit-Id: f31ec490-6978-40af-a299-301a0b633e09
	I0923 13:34:22.602202    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.602202    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.602202    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.602202    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.602202    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:22.602806    7084 pod_ready.go:98] node "multinode-560300" hosting pod "kube-scheduler-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:22.602806    7084 pod_ready.go:82] duration metric: took 398.8039ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:22.602806    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "kube-scheduler-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:22.602806    7084 pod_ready.go:39] duration metric: took 1.5813021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:34:22.602806    7084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 13:34:22.623892    7084 command_runner.go:130] > -16
	I0923 13:34:22.623892    7084 ops.go:34] apiserver oom_adj: -16
	I0923 13:34:22.623892    7084 kubeadm.go:597] duration metric: took 12.8571878s to restartPrimaryControlPlane
	I0923 13:34:22.623892    7084 kubeadm.go:394] duration metric: took 12.9165376s to StartCluster
	I0923 13:34:22.623892    7084 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:22.623892    7084 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:34:22.626920    7084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:22.628165    7084 start.go:235] Will wait 6m0s for node &{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 13:34:22.628165    7084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 13:34:22.628825    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:34:22.633232    7084 out.go:177] * Enabled addons: 
	I0923 13:34:22.636903    7084 addons.go:510] duration metric: took 8.7368ms for enable addons: enabled=[]
	I0923 13:34:22.638851    7084 out.go:177] * Verifying Kubernetes components...
	I0923 13:34:22.651134    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:22.907023    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:34:22.942263    7084 node_ready.go:35] waiting up to 6m0s for node "multinode-560300" to be "Ready" ...
	I0923 13:34:22.942263    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.942263    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:22.942263    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:22.942263    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.946054    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:22.946054    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.946054    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.946132    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.946132    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.946132    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:23 GMT
	I0923 13:34:22.946132    7084 round_trippers.go:580]     Audit-Id: d85e60c0-8d36-41e1-965b-3b4bec1c420a
	I0923 13:34:22.946132    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.946349    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:23.442774    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:23.442774    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:23.442774    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:23.442774    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:23.447025    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:23.447025    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:23.447025    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:23.447025    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:23.447025    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:23.447025    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:23.447025    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:23 GMT
	I0923 13:34:23.447025    7084 round_trippers.go:580]     Audit-Id: 0ee239fa-384f-43d6-a803-9ad00153f5dc
	I0923 13:34:23.447509    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:23.942563    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:23.942563    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:23.942563    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:23.942563    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:23.946911    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:23.947005    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:23.947005    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:23.947005    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:23.947005    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:23.947005    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:24 GMT
	I0923 13:34:23.947005    7084 round_trippers.go:580]     Audit-Id: 15730503-fe0a-4d9a-b157-656b91aaa93c
	I0923 13:34:23.947005    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:23.947472    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:24.442802    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:24.442802    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:24.442802    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:24.442802    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:24.447524    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:24.447524    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:24.447648    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:24.447648    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:24.447648    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:24.447648    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:24.447648    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:24 GMT
	I0923 13:34:24.447648    7084 round_trippers.go:580]     Audit-Id: accdb84c-111d-4749-99c9-48a060b5841f
	I0923 13:34:24.448083    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:24.942658    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:24.942658    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:24.942658    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:24.942658    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:24.946965    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:24.947587    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:24.947587    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:24.947587    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:25 GMT
	I0923 13:34:24.947587    7084 round_trippers.go:580]     Audit-Id: 1d2d5195-09ac-4560-abd2-0f1413ac714a
	I0923 13:34:24.947587    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:24.947691    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:24.947691    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:24.948462    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:24.949162    7084 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:34:25.443557    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:25.443557    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:25.443557    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:25.443557    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:25.447086    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:25.447086    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:25.447086    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:25.447610    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:25.447610    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:25 GMT
	I0923 13:34:25.447610    7084 round_trippers.go:580]     Audit-Id: 98bf2518-e6ac-4434-8b0a-bc07f9a3f0c2
	I0923 13:34:25.447610    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:25.447610    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:25.448557    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:25.942977    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:25.942977    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:25.942977    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:25.942977    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:25.947683    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:25.947778    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:25.947778    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:25.947778    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:25.947778    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:25.947914    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:26 GMT
	I0923 13:34:25.947914    7084 round_trippers.go:580]     Audit-Id: 90577b39-0071-4325-8e08-df51e971b616
	I0923 13:34:25.947914    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:25.948147    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:26.443384    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:26.443384    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:26.443384    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:26.443384    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:26.447028    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:26.447028    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:26.447129    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:26.447129    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:26 GMT
	I0923 13:34:26.447129    7084 round_trippers.go:580]     Audit-Id: d58d6db9-8749-4b06-8d1c-7bbdad7daa27
	I0923 13:34:26.447129    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:26.447129    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:26.447129    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:26.447457    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:26.943065    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:26.943065    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:26.943065    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:26.943065    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:26.952535    7084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 13:34:26.952593    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:26.952627    7084 round_trippers.go:580]     Audit-Id: 9d5b0777-e5df-4118-a57a-80bd877dde2f
	I0923 13:34:26.952649    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:26.952649    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:26.952649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:26.952649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:26.952649    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:27 GMT
	I0923 13:34:26.952649    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:26.953240    7084 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:34:27.443233    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:27.443233    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:27.443233    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:27.443233    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:27.446998    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:27.446998    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:27.446998    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:27.446998    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:27.446998    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:27 GMT
	I0923 13:34:27.446998    7084 round_trippers.go:580]     Audit-Id: a9fc73d9-f836-4d24-9ae4-9c06385e6563
	I0923 13:34:27.446998    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:27.446998    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:27.448440    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:27.944598    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:27.944598    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:27.944598    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:27.944598    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:27.948027    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:27.948027    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:27.948027    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:27.948027    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:27.948027    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:27.948027    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:27.948027    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:28 GMT
	I0923 13:34:27.948027    7084 round_trippers.go:580]     Audit-Id: 470a7736-33c7-4c45-9dea-9fd2138a7b85
	I0923 13:34:27.948241    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:28.442833    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:28.442833    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:28.442833    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:28.442833    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:28.447972    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:28.448064    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:28.448064    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:28.448064    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:28.448064    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:28.448064    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:28.448064    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:28 GMT
	I0923 13:34:28.448064    7084 round_trippers.go:580]     Audit-Id: 34a3aba2-a03d-47ef-a2bd-a86ae2838dd6
	I0923 13:34:28.448376    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:28.943461    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:28.943461    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:28.943461    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:28.943461    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:28.947533    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:28.947533    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:28.947533    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:28.947533    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:28.947533    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:29 GMT
	I0923 13:34:28.947533    7084 round_trippers.go:580]     Audit-Id: f5674216-47e4-417b-ae18-47041394862d
	I0923 13:34:28.947533    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:28.947533    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:28.947533    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:29.443825    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:29.443825    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:29.443825    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:29.443825    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:29.448097    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:29.448226    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:29.448226    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:29.448226    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:29.448226    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:29.448226    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:29 GMT
	I0923 13:34:29.448226    7084 round_trippers.go:580]     Audit-Id: eb5ecd94-052e-428c-b0f5-cdbb0b9a5c35
	I0923 13:34:29.448226    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:29.448549    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:29.448785    7084 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:34:29.943198    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:29.943198    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:29.943198    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:29.943198    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:29.947767    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:29.947767    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:29.947767    7084 round_trippers.go:580]     Audit-Id: 9293a795-157e-4780-8f3b-d88ff972393a
	I0923 13:34:29.947767    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:29.947767    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:29.947767    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:29.947767    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:29.947767    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:30 GMT
	I0923 13:34:29.948651    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:30.444014    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:30.444115    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:30.444115    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:30.444115    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:30.450186    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:34:30.450312    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:30.450312    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:30.450312    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:30 GMT
	I0923 13:34:30.450312    7084 round_trippers.go:580]     Audit-Id: 470e870b-8ce1-43ca-a23b-782b830ef6cc
	I0923 13:34:30.450312    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:30.450312    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:30.450312    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:30.450469    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:30.943172    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:30.943172    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:30.943172    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:30.943172    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:30.947605    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:30.947605    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:30.947716    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:30.947716    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:30.947716    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:31 GMT
	I0923 13:34:30.947716    7084 round_trippers.go:580]     Audit-Id: 111dba1d-efbe-4966-84e8-d453eda89ca8
	I0923 13:34:30.947716    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:30.947716    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:30.947916    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:31.444359    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:31.444359    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:31.444359    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:31.444359    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:31.448942    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:31.448942    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:31.448942    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:31 GMT
	I0923 13:34:31.448942    7084 round_trippers.go:580]     Audit-Id: 71d5fcd9-00a1-4186-b536-24837ab848a1
	I0923 13:34:31.449041    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:31.449041    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:31.449041    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:31.449041    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:31.449419    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:31.450192    7084 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:34:31.944391    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:31.944470    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:31.944470    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:31.944470    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:31.947873    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:31.947873    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:31.947873    7084 round_trippers.go:580]     Audit-Id: 7ef5da8e-de20-4172-86d9-ae8b0f001440
	I0923 13:34:31.948079    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:31.948079    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:31.948079    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:31.948079    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:31.948079    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:32 GMT
	I0923 13:34:31.948460    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:32.443390    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:32.443390    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.443390    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.443390    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.446851    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:32.447762    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.447762    7084 round_trippers.go:580]     Audit-Id: e718816a-a853-4faf-98b7-30da9ce7c07d
	I0923 13:34:32.447762    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.447762    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.447762    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.447762    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.447762    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:32 GMT
	I0923 13:34:32.448052    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:32.943502    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:32.943502    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.943502    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.943502    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.947220    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:32.947665    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.947665    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.947665    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:32.947665    7084 round_trippers.go:580]     Audit-Id: e4d7fd9f-0234-4a76-9dec-f09e95a44a01
	I0923 13:34:32.947665    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.947665    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.947740    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.947896    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:32.948600    7084 node_ready.go:49] node "multinode-560300" has status "Ready":"True"
	I0923 13:34:32.948658    7084 node_ready.go:38] duration metric: took 10.0057193s for node "multinode-560300" to be "Ready" ...
	I0923 13:34:32.948714    7084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:34:32.948889    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:32.948889    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.948889    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.948889    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.954646    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:32.954670    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.954670    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.954670    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:32.954670    7084 round_trippers.go:580]     Audit-Id: 27a9c888-409d-4ff5-b3e6-31dd39a04cf5
	I0923 13:34:32.954670    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.954670    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.954839    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.956152    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1829"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90024 chars]
	I0923 13:34:32.960650    7084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:32.960650    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:32.960650    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.960650    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.960650    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.963644    7084 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 13:34:32.963751    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.963751    7084 round_trippers.go:580]     Audit-Id: 767e7ce8-9691-4ef4-86c3-cd45a19c578a
	I0923 13:34:32.963751    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.963751    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.963829    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.963829    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.963829    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:32.964056    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:32.965180    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:32.965246    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.965246    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.965246    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.967781    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:32.967781    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.967862    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.967862    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:32.967862    7084 round_trippers.go:580]     Audit-Id: 236ce0fc-2448-48a6-b021-e5e3564e9b66
	I0923 13:34:32.967862    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.967862    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.967862    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.968235    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:33.461201    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:33.461201    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:33.461201    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:33.461201    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:33.465759    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:33.465759    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:33.465759    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:33.465759    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:33.465759    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:33.465857    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:33.465857    7084 round_trippers.go:580]     Audit-Id: 52bea11d-b25b-4a0f-a046-72e008baa47f
	I0923 13:34:33.465857    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:33.466531    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:33.466797    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:33.466797    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:33.466797    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:33.466797    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:33.469823    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:33.469823    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:33.469823    7084 round_trippers.go:580]     Audit-Id: 23ea692e-0aa1-4ce6-8fdb-78e0ac1e9d6d
	I0923 13:34:33.469919    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:33.469919    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:33.469919    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:33.469919    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:33.469919    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:33.470166    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:33.960919    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:33.960919    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:33.960919    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:33.960919    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:33.964864    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:33.964864    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:33.964864    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:33.964864    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:33.964864    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:33.965050    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:34 GMT
	I0923 13:34:33.965050    7084 round_trippers.go:580]     Audit-Id: 1070b9c7-6d3d-49b3-9d65-0a87575304bd
	I0923 13:34:33.965050    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:33.965111    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:33.966437    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:33.966437    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:33.966514    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:33.966514    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:33.969281    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:33.969281    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:33.969281    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:33.969281    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:33.969281    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:33.969281    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:34 GMT
	I0923 13:34:33.969281    7084 round_trippers.go:580]     Audit-Id: b023b399-c0a1-424b-a44e-0ba91c4b161c
	I0923 13:34:33.969281    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:33.969281    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:34.461544    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:34.461544    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:34.461544    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:34.461544    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:34.466744    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:34.466830    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:34.466830    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:34.466830    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:34.466830    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:34.466830    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:34.466830    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:34 GMT
	I0923 13:34:34.466909    7084 round_trippers.go:580]     Audit-Id: 382d7fef-d697-4903-8245-df8ac560e2d6
	I0923 13:34:34.467148    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:34.468111    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:34.468111    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:34.468111    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:34.468111    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:34.472049    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:34.472117    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:34.472117    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:34.472117    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:34 GMT
	I0923 13:34:34.472117    7084 round_trippers.go:580]     Audit-Id: a58abc47-999b-4840-bcc5-a8dbbbfb0ee0
	I0923 13:34:34.472117    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:34.472177    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:34.472177    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:34.472598    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:34.961200    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:34.961200    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:34.961200    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:34.961200    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:34.965430    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:34.965430    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:34.965519    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:35 GMT
	I0923 13:34:34.965519    7084 round_trippers.go:580]     Audit-Id: 3e2fdbfb-d230-475e-b791-fd097549bf6f
	I0923 13:34:34.965519    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:34.965519    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:34.965519    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:34.965519    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:34.965519    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:34.966357    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:34.966357    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:34.966357    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:34.966357    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:34.973198    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:34:34.973198    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:34.973198    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:34.973198    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:34.973198    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:34.973198    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:35 GMT
	I0923 13:34:34.973198    7084 round_trippers.go:580]     Audit-Id: 47615921-14f9-4a3f-828d-bd2cc417ab7f
	I0923 13:34:34.973198    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:34.973198    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:34.973918    7084 pod_ready.go:103] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"False"
	I0923 13:34:35.461482    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:35.461482    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.461482    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.461482    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.465774    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:35.465834    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.465834    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.465834    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.465834    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.465834    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:35 GMT
	I0923 13:34:35.465879    7084 round_trippers.go:580]     Audit-Id: 86320bcf-6d21-43a6-8b8c-21eb1af63f4f
	I0923 13:34:35.465879    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.466281    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:35.467436    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.467436    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.467436    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.467436    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.474734    7084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 13:34:35.474734    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.474734    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.474734    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.474734    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.474734    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:35 GMT
	I0923 13:34:35.474734    7084 round_trippers.go:580]     Audit-Id: b6f120e0-ae73-4bee-ae15-aa5e6029c5fe
	I0923 13:34:35.474734    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.474734    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.961269    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:35.961269    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.961269    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.961269    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.965651    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:35.965651    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.965651    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.965651    7084 round_trippers.go:580]     Audit-Id: deed854d-755b-4731-8750-c146a513a261
	I0923 13:34:35.965651    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.965651    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.965651    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.965651    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.965862    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0923 13:34:35.966544    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.966618    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.966618    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.966618    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.968759    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.969086    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.969086    7084 round_trippers.go:580]     Audit-Id: cea3089c-74ca-4174-b629-6d3d37be4449
	I0923 13:34:35.969086    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.969086    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.969086    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.969086    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.969086    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.969334    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.970028    7084 pod_ready.go:93] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:35.970080    7084 pod_ready.go:82] duration metric: took 3.0092274s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.970131    7084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.970302    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:34:35.970372    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.970372    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.970589    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.972750    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.973484    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.973484    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.973484    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.973532    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.973532    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.973532    7084 round_trippers.go:580]     Audit-Id: 3edc7c65-c8e1-452b-a627-96548da01d14
	I0923 13:34:35.973532    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.973797    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1822","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0923 13:34:35.974116    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.974116    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.974116    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.974116    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.976691    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.977150    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.977150    7084 round_trippers.go:580]     Audit-Id: 7a85e087-f22c-481c-84bf-c4e8214fb6cb
	I0923 13:34:35.977150    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.977150    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.977199    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.977199    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.977199    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.977429    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.978162    7084 pod_ready.go:93] pod "etcd-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:35.978217    7084 pod_ready.go:82] duration metric: took 7.9736ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.978217    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.978439    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:34:35.978439    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.978439    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.978530    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.983713    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:35.983713    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.983713    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.983713    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.983713    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.983713    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.983713    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.983713    7084 round_trippers.go:580]     Audit-Id: f5bacf51-d3cc-44cf-95c2-9bda12e6d41b
	I0923 13:34:35.983713    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"c88cb5c4-fe30-4354-bf55-1f281cf11190","resourceVersion":"1816","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.156.56:8443","kubernetes.io/config.hash":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.mirror":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.seen":"2024-09-23T13:34:12.942044692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0923 13:34:35.984351    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.984351    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.984351    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.984351    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.987579    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:35.987579    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.987579    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.987579    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.987579    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.987579    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.987579    7084 round_trippers.go:580]     Audit-Id: 130a4489-c829-40f3-9b72-eb4067f8ac64
	I0923 13:34:35.987579    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.987916    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.988320    7084 pod_ready.go:93] pod "kube-apiserver-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:35.988352    7084 pod_ready.go:82] duration metric: took 10.1346ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.988392    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.988466    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:34:35.988498    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.988498    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.988537    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.990653    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.990653    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.990653    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.990653    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.990653    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.990653    7084 round_trippers.go:580]     Audit-Id: 921b7c4d-b6e0-485f-971f-15d865263097
	I0923 13:34:35.990653    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.990653    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.990653    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"1809","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0923 13:34:35.991772    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.991772    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.991829    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.991829    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.994222    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.994269    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.994300    7084 round_trippers.go:580]     Audit-Id: df40959d-91e9-4a5c-8eb6-eb033d775488
	I0923 13:34:35.994300    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.994300    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.994300    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.994300    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.994346    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.994496    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.994816    7084 pod_ready.go:93] pod "kube-controller-manager-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:35.994894    7084 pod_ready.go:82] duration metric: took 6.5012ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.994894    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.994894    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:34:35.994894    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.994894    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.994894    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.997301    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.997334    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.997334    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.997377    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.997377    7084 round_trippers.go:580]     Audit-Id: a30a6599-3295-43a4-b921-75c3c87ff202
	I0923 13:34:35.997377    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.997377    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.997377    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.997587    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dbkdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"44a5a18e-0e93-4293-8d4b-13e3ec9acfef","resourceVersion":"1660","creationTimestamp":"2024-09-23T13:20:08Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:20:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0923 13:34:35.997679    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:34:35.997679    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.997679    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.997679    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.000450    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:36.000450    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.000450    7084 round_trippers.go:580]     Audit-Id: e6912442-cbc0-4295-9638-745492c131ab
	I0923 13:34:36.000450    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.000503    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.000503    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.000503    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.000503    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.000618    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"781efd95-4e81-4850-a300-9cef56c5e6d4","resourceVersion":"1786","creationTimestamp":"2024-09-23T13:30:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_30_01_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:30:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4304 chars]
	I0923 13:34:36.000784    7084 pod_ready.go:98] node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:34:36.000784    7084 pod_ready.go:82] duration metric: took 5.8894ms for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:36.000784    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:34:36.000784    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:36.162004    7084 request.go:632] Waited for 161.2088ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:34:36.162004    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:34:36.162004    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.162004    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.162004    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.166500    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:36.166582    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.166659    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.166659    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.166659    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.166659    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.166659    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.166659    7084 round_trippers.go:580]     Audit-Id: ea93f38b-24cc-44de-b9bd-d60128e72fd8
	I0923 13:34:36.166790    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5t97","generateName":"kube-proxy-","namespace":"kube-system","uid":"49d7601a-bda4-421e-bde7-acc35c157962","resourceVersion":"1686","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I0923 13:34:36.361987    7084 request.go:632] Waited for 194.0221ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:34:36.361987    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:34:36.361987    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.361987    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.361987    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.365703    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:36.365703    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.365793    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.365793    7084 round_trippers.go:580]     Audit-Id: a5e8f6ce-22e0-480d-a066-17f57588fc6f
	I0923 13:34:36.365793    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.365793    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.365793    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.365793    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.366070    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"1683","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4486 chars]
	I0923 13:34:36.366892    7084 pod_ready.go:98] node "multinode-560300-m02" hosting pod "kube-proxy-g5t97" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m02" has status "Ready":"Unknown"
	I0923 13:34:36.366965    7084 pod_ready.go:82] duration metric: took 366.1567ms for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:36.366965    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m02" hosting pod "kube-proxy-g5t97" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m02" has status "Ready":"Unknown"
	I0923 13:34:36.366965    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:36.561510    7084 request.go:632] Waited for 194.4251ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:34:36.561858    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:34:36.561858    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.561858    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.561858    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.567339    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:36.567339    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.567339    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.567339    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.567339    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.567339    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.567339    7084 round_trippers.go:580]     Audit-Id: e4178d32-33fa-4885-8e58-0c7bdf0fc9cd
	I0923 13:34:36.567339    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.567339    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"1800","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0923 13:34:36.761568    7084 request.go:632] Waited for 192.9062ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:36.761568    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:36.761568    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.761568    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.761568    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.764743    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:36.764743    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.764743    7084 round_trippers.go:580]     Audit-Id: 76e11f9b-f56a-4e7a-b118-b8e6cb9f754f
	I0923 13:34:36.764743    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.764743    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.764743    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.764743    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.764743    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.766459    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:36.766718    7084 pod_ready.go:93] pod "kube-proxy-rgmcw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:36.766718    7084 pod_ready.go:82] duration metric: took 399.7261ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:36.766718    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:36.961791    7084 request.go:632] Waited for 194.4334ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:34:36.962095    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:34:36.962095    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.962095    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.962095    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.965775    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:36.965840    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.965903    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.965903    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.965903    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.965903    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:36.965903    7084 round_trippers.go:580]     Audit-Id: 3a4bd738-5367-41ed-89bd-c94eb0b00a8d
	I0923 13:34:36.965959    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.966093    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"1810","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0923 13:34:37.161958    7084 request.go:632] Waited for 194.9383ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:37.161958    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:37.161958    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.161958    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.161958    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.166108    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:37.166108    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.166108    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.166108    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.166108    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.166108    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.166108    7084 round_trippers.go:580]     Audit-Id: 07692773-9451-4820-bd93-f1b5d8effde2
	I0923 13:34:37.166108    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.166108    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:37.166730    7084 pod_ready.go:93] pod "kube-scheduler-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:37.166730    7084 pod_ready.go:82] duration metric: took 399.4491ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:37.166730    7084 pod_ready.go:39] duration metric: took 4.2177311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:34:37.167317    7084 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:34:37.179486    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:37.200800    7084 command_runner.go:130] > 1960
	I0923 13:34:37.200800    7084 api_server.go:72] duration metric: took 14.5711188s to wait for apiserver process to appear ...
	I0923 13:34:37.200800    7084 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:34:37.200800    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:37.208033    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 200:
	ok
	I0923 13:34:37.208033    7084 round_trippers.go:463] GET https://172.19.156.56:8443/version
	I0923 13:34:37.208033    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.208033    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.208033    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.210777    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:37.210777    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Audit-Id: 113e3b70-7cd0-4af4-9b48-3aff7d2d7ac2
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.210777    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.210777    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Content-Length: 263
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.210777    7084 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 13:34:37.210777    7084 api_server.go:141] control plane version: v1.31.1
	I0923 13:34:37.210777    7084 api_server.go:131] duration metric: took 9.9767ms to wait for apiserver health ...
	I0923 13:34:37.210777    7084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:34:37.361896    7084 request.go:632] Waited for 151.1091ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:37.362271    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:37.362271    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.362271    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.362271    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.367708    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:37.367708    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.367708    7084 round_trippers.go:580]     Audit-Id: f697420b-b93c-4ae0-9ad8-4cbb3a5dbc56
	I0923 13:34:37.367708    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.367708    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.367708    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.367708    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.367708    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.370230    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1848"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89971 chars]
	I0923 13:34:37.376732    7084 system_pods.go:59] 12 kube-system pods found
	I0923 13:34:37.376815    7084 system_pods.go:61] "coredns-7c65d6cfc9-glx94" [f476c8f8-667a-48d4-84f8-4aa15336cea9] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "etcd-multinode-560300" [477ee4f5-e333-4042-97cd-8457f60fd696] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kindnet-mdnmc" [ffaf3266-f3b8-424f-888b-15aff927de53] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kindnet-qg99z" [0f714fff-dd9b-4ba3-b2e9-6e9e18f21ae9] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kindnet-z9mrc" [c9dfa12e-54ef-4d0b-825e-bcbcaa77b5a9] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-apiserver-multinode-560300" [c88cb5c4-fe30-4354-bf55-1f281cf11190] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-controller-manager-multinode-560300" [aa0d358b-19fd-4553-8a34-f772ba945019] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-proxy-dbkdp" [44a5a18e-0e93-4293-8d4b-13e3ec9acfef] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-proxy-g5t97" [49d7601a-bda4-421e-bde7-acc35c157962] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-proxy-rgmcw" [97050e09-6fc3-4e7b-b00e-07eb9332bf15] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-scheduler-multinode-560300" [01e5d6a3-2eb6-4fa4-8607-072724fb2880] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "storage-provisioner" [444d1029-f19d-4fa6-b454-c9c710e6d9b2] Running
	I0923 13:34:37.376815    7084 system_pods.go:74] duration metric: took 166.0265ms to wait for pod list to return data ...
	I0923 13:34:37.376815    7084 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:34:37.562445    7084 request.go:632] Waited for 185.6179ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/default/serviceaccounts
	I0923 13:34:37.562750    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/default/serviceaccounts
	I0923 13:34:37.562750    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.562750    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.562750    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.567131    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:37.567131    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.567131    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.567131    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Content-Length: 262
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Audit-Id: 7d46abf1-032c-40e4-8bdc-d314dbbfbbd0
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.567131    7084 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1848"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6aaed0f9-99f6-4dde-94ff-d8ba898738d6","resourceVersion":"351","creationTimestamp":"2024-09-23T13:12:59Z"}}]}
	I0923 13:34:37.567808    7084 default_sa.go:45] found service account: "default"
	I0923 13:34:37.567900    7084 default_sa.go:55] duration metric: took 191.0728ms for default service account to be created ...
	I0923 13:34:37.567900    7084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:34:37.761462    7084 request.go:632] Waited for 193.4423ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:37.761462    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:37.761462    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.761462    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.761462    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.766564    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:37.766650    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.766650    7084 round_trippers.go:580]     Audit-Id: 497101ae-4737-4658-8da4-0db07c330a7c
	I0923 13:34:37.766705    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.766705    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.766705    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.766705    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.766705    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.768613    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1848"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89971 chars]
	I0923 13:34:37.773732    7084 system_pods.go:86] 12 kube-system pods found
	I0923 13:34:37.773823    7084 system_pods.go:89] "coredns-7c65d6cfc9-glx94" [f476c8f8-667a-48d4-84f8-4aa15336cea9] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "etcd-multinode-560300" [477ee4f5-e333-4042-97cd-8457f60fd696] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kindnet-mdnmc" [ffaf3266-f3b8-424f-888b-15aff927de53] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kindnet-qg99z" [0f714fff-dd9b-4ba3-b2e9-6e9e18f21ae9] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kindnet-z9mrc" [c9dfa12e-54ef-4d0b-825e-bcbcaa77b5a9] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-apiserver-multinode-560300" [c88cb5c4-fe30-4354-bf55-1f281cf11190] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-controller-manager-multinode-560300" [aa0d358b-19fd-4553-8a34-f772ba945019] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-proxy-dbkdp" [44a5a18e-0e93-4293-8d4b-13e3ec9acfef] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-proxy-g5t97" [49d7601a-bda4-421e-bde7-acc35c157962] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-proxy-rgmcw" [97050e09-6fc3-4e7b-b00e-07eb9332bf15] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-scheduler-multinode-560300" [01e5d6a3-2eb6-4fa4-8607-072724fb2880] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "storage-provisioner" [444d1029-f19d-4fa6-b454-c9c710e6d9b2] Running
	I0923 13:34:37.773823    7084 system_pods.go:126] duration metric: took 205.9088ms to wait for k8s-apps to be running ...
	I0923 13:34:37.773823    7084 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:34:37.781033    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:34:37.804835    7084 system_svc.go:56] duration metric: took 31.0102ms WaitForService to wait for kubelet
	I0923 13:34:37.804977    7084 kubeadm.go:582] duration metric: took 15.1752124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:34:37.805006    7084 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:34:37.961594    7084 request.go:632] Waited for 156.4601ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes
	I0923 13:34:37.961594    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes
	I0923 13:34:37.961594    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.961594    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.961594    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.966164    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:37.966223    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.966223    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.966223    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.966223    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:38 GMT
	I0923 13:34:37.966223    7084 round_trippers.go:580]     Audit-Id: f8479b5e-2545-402a-8deb-5fac0f417e3f
	I0923 13:34:37.966223    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.966223    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.966223    7084 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1848"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16065 chars]
	I0923 13:34:37.968192    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:37.968321    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:37.968321    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:37.968321    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:37.968321    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:37.968321    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:37.968321    7084 node_conditions.go:105] duration metric: took 163.3032ms to run NodePressure ...
	I0923 13:34:37.968436    7084 start.go:241] waiting for startup goroutines ...
	I0923 13:34:37.968436    7084 start.go:246] waiting for cluster config update ...
	I0923 13:34:37.968436    7084 start.go:255] writing updated cluster config ...
	I0923 13:34:37.972041    7084 out.go:201] 
	I0923 13:34:37.975251    7084 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:34:37.985805    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:34:37.985938    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:34:37.990678    7084 out.go:177] * Starting "multinode-560300-m02" worker node in "multinode-560300" cluster
	I0923 13:34:37.993031    7084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:34:37.994029    7084 cache.go:56] Caching tarball of preloaded images
	I0923 13:34:37.994185    7084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 13:34:37.994185    7084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 13:34:37.994185    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:34:37.995400    7084 start.go:360] acquireMachinesLock for multinode-560300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:34:37.996381    7084 start.go:364] duration metric: took 981µs to acquireMachinesLock for "multinode-560300-m02"
	I0923 13:34:37.996381    7084 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:34:37.996381    7084 fix.go:54] fixHost starting: m02
	I0923 13:34:37.996983    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:39.809217    7084 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 13:34:39.809685    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:39.809685    7084 fix.go:112] recreateIfNeeded on multinode-560300-m02: state=Stopped err=<nil>
	W0923 13:34:39.809685    7084 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:34:39.813166    7084 out.go:177] * Restarting existing hyperv VM for "multinode-560300-m02" ...
	I0923 13:34:39.815442    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-560300-m02
	I0923 13:34:42.542544    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:34:42.542544    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:42.542899    7084 main.go:141] libmachine: Waiting for host to start...
	I0923 13:34:42.542899    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:44.505211    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:34:44.505211    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:44.505211    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:34:46.706131    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:34:46.706131    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:47.706556    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:49.637596    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:34:49.638305    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:49.638305    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:34:51.828815    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:34:51.828815    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:52.829244    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:54.726159    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:34:54.726619    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:54.726619    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:34:56.894069    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:34:56.894069    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:57.894477    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:59.812955    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:34:59.812955    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:59.813356    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:02.053652    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:35:02.053942    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:03.055265    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:05.029320    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:05.029320    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:05.029320    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:07.459752    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:07.460126    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:07.463492    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:09.370891    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:09.370891    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:09.371826    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:11.681787    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:11.681973    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:11.681973    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:35:11.684088    7084 machine.go:93] provisionDockerMachine start ...
	I0923 13:35:11.684153    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:13.619849    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:13.620050    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:13.620050    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:15.918661    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:15.918661    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:15.922865    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:15.922865    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:15.922865    7084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:35:16.056732    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 13:35:16.056732    7084 buildroot.go:166] provisioning hostname "multinode-560300-m02"
	I0923 13:35:16.057269    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:17.980566    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:17.980566    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:17.980566    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:20.295878    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:20.295878    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:20.299670    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:20.300313    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:20.300313    7084 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-560300-m02 && echo "multinode-560300-m02" | sudo tee /etc/hostname
	I0923 13:35:20.466164    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-560300-m02
	
	I0923 13:35:20.466707    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:22.389514    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:22.389514    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:22.389514    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:24.649347    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:24.649347    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:24.653957    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:24.653957    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:24.653957    7084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-560300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-560300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-560300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:35:24.814634    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:35:24.814634    7084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 13:35:24.814634    7084 buildroot.go:174] setting up certificates
	I0923 13:35:24.814634    7084 provision.go:84] configureAuth start
	I0923 13:35:24.814634    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:26.733306    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:26.733306    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:26.733306    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:29.020658    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:29.020658    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:29.020658    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:30.930455    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:30.930455    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:30.931247    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:33.153362    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:33.154229    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:33.154229    7084 provision.go:143] copyHostCerts
	I0923 13:35:33.154439    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 13:35:33.154661    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 13:35:33.154661    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 13:35:33.155063    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 13:35:33.156143    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 13:35:33.156437    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 13:35:33.156516    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 13:35:33.157017    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 13:35:33.158217    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 13:35:33.158249    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 13:35:33.158249    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 13:35:33.158249    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 13:35:33.159639    7084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-560300-m02 san=[127.0.0.1 172.19.147.0 localhost minikube multinode-560300-m02]
	I0923 13:35:33.295795    7084 provision.go:177] copyRemoteCerts
	I0923 13:35:33.304719    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:35:33.305314    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:35.148187    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:35.148187    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:35.148446    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:37.377806    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:37.377806    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:37.378933    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:35:37.483765    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.178157s)
	I0923 13:35:37.483838    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 13:35:37.483838    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:35:37.524211    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 13:35:37.524475    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0923 13:35:37.563616    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 13:35:37.564209    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:35:37.605585    7084 provision.go:87] duration metric: took 12.7900878s to configureAuth
	I0923 13:35:37.605680    7084 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:35:37.606305    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:35:37.606414    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:39.460470    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:39.461012    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:39.461079    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:41.649608    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:41.649608    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:41.653807    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:41.654156    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:41.654156    7084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 13:35:41.795099    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 13:35:41.795099    7084 buildroot.go:70] root file system type: tmpfs
	I0923 13:35:41.795329    7084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 13:35:41.795329    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:43.630208    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:43.630208    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:43.630208    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:45.830017    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:45.830017    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:45.834095    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:45.834192    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:45.834192    7084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.156.56"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 13:35:46.013447    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.156.56
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
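The comments inside the unit written above describe systemd's drop-in inheritance rule: a second `ExecStart=` is rejected for `Type=notify` services unless the inherited one is cleared first. A minimal, hypothetical drop-in illustrating the same pattern (path and dockerd flags are illustrative, not taken from this run):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
[Service]
# An empty ExecStart= clears the command inherited from the base unit.
# Without it, systemd refuses to start with:
#   "Service has more than one ExecStart= setting, which is only allowed
#    for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```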
	I0923 13:35:46.013579    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:47.892547    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:47.893564    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:47.893750    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:50.137028    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:50.137028    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:50.141166    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:50.141773    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:50.141773    7084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 13:35:52.444305    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
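The command above installs the new unit only when it differs from the live one: `diff` exits non-zero both on mismatch and when the live file is missing (as happened here, hence the "can't stat" message followed by the install). A minimal sketch of that write-then-compare pattern, using a stand-in file under `/tmp` instead of the real `docker.service`:

```shell
#!/bin/sh
# Sketch of the "install unit only if changed" pattern seen above.
TARGET=/tmp/demo.service   # stand-in for /lib/systemd/system/docker.service
rm -f "$TARGET"            # simulate the missing live unit from the log

printf '%s\n' '[Unit]' 'Description=demo unit' > "${TARGET}.new"

# diff exits non-zero on difference OR when $TARGET does not exist yet.
if ! diff -u "$TARGET" "${TARGET}.new" >/dev/null 2>&1; then
    # The real flow follows this with:
    #   systemctl -f daemon-reload && systemctl -f enable docker \
    #     && systemctl -f restart docker
    mv "${TARGET}.new" "$TARGET"
fi
cat "$TARGET"
```

Promoting `.new` with `mv` also makes the swap atomic on the same filesystem, so a restarting daemon never reads a half-written unit.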
	I0923 13:35:52.444357    7084 machine.go:96] duration metric: took 40.7575175s to provisionDockerMachine
	I0923 13:35:52.444424    7084 start.go:293] postStartSetup for "multinode-560300-m02" (driver="hyperv")
	I0923 13:35:52.444480    7084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:35:52.455831    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:35:52.455831    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:54.295962    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:54.295962    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:54.296542    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:56.560763    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:56.560763    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:56.561417    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:35:56.675255    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.21914s)
	I0923 13:35:56.684060    7084 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:35:56.693981    7084 command_runner.go:130] > NAME=Buildroot
	I0923 13:35:56.694652    7084 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:35:56.694688    7084 command_runner.go:130] > ID=buildroot
	I0923 13:35:56.694688    7084 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:35:56.694688    7084 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:35:56.694942    7084 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:35:56.695009    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 13:35:56.695009    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 13:35:56.695009    7084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 13:35:56.695009    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 13:35:56.704792    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:35:56.720657    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 13:35:56.764536    7084 start.go:296] duration metric: took 4.3198208s for postStartSetup
	I0923 13:35:56.764536    7084 fix.go:56] duration metric: took 1m18.7628379s for fixHost
	I0923 13:35:56.764536    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:58.599988    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:58.599988    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:58.600063    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:00.780434    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:00.780434    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:00.784402    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:36:00.784774    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:36:00.784847    7084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:36:00.933863    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727098561.142260224
	
	I0923 13:36:00.933958    7084 fix.go:216] guest clock: 1727098561.142260224
	I0923 13:36:00.933958    7084 fix.go:229] Guest: 2024-09-23 13:36:01.142260224 +0000 UTC Remote: 2024-09-23 13:35:56.7645364 +0000 UTC m=+215.749594001 (delta=4.377723824s)
	I0923 13:36:00.933958    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:02.788774    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:02.788774    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:02.788845    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:05.024843    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:05.025710    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:05.029525    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:36:05.029925    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:36:05.029999    7084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727098560
	I0923 13:36:05.177960    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 13:36:00 UTC 2024
	
	I0923 13:36:05.177960    7084 fix.go:236] clock set: Mon Sep 23 13:36:00 UTC 2024
	 (err=<nil>)
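The clock fix above reads the guest's `date +%s.%N`, compares it with the host's view of "now", and resyncs with `sudo date -s @<epoch>` when they drift. A minimal sketch of the drift check, using whole-second epoch values that approximate the timestamps printed in the log (the threshold of 2s is illustrative, not minikube's actual cutoff):

```shell
#!/bin/sh
# Sketch of the guest-clock drift check above.
GUEST_EPOCH=1727098561     # guest: 2024-09-23 13:36:01 UTC (from `date +%s.%N`)
HOST_EPOCH=1727098556      # host:  2024-09-23 13:35:56 UTC (the "Remote" time)

DELTA=$((GUEST_EPOCH - HOST_EPOCH))
echo "delta=${DELTA}s"

# Resync when the guest drifts more than 2s in either direction.
if [ "$DELTA" -gt 2 ] || [ "$DELTA" -lt -2 ]; then
    # The real flow runs `sudo date -s @<epoch>` over SSH at this point.
    echo "resync needed"
fi
```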
	I0923 13:36:05.177960    7084 start.go:83] releasing machines lock for "multinode-560300-m02", held for 1m27.1756945s
	I0923 13:36:05.177960    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:07.034702    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:07.034702    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:07.034702    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:09.311777    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:09.311777    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:09.314188    7084 out.go:177] * Found network options:
	I0923 13:36:09.316740    7084 out.go:177]   - NO_PROXY=172.19.156.56
	W0923 13:36:09.319110    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:36:09.321118    7084 out.go:177]   - NO_PROXY=172.19.156.56
	W0923 13:36:09.324063    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:36:09.325562    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:36:09.327996    7084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 13:36:09.327996    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:09.335604    7084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:36:09.336598    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:13.592675    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:13.593498    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:13.593498    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:36:13.611178    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:13.611532    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:13.611804    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:36:13.685628    7084 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0923 13:36:13.685724    7084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3574343s)
	W0923 13:36:13.685724    7084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 13:36:13.718008    7084 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0923 13:36:13.718008    7084 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3821085s)
	W0923 13:36:13.718008    7084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:36:13.727942    7084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:36:13.760346    7084 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 13:36:13.760346    7084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 13:36:13.760346    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:36:13.760346    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0923 13:36:13.777784    7084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 13:36:13.777784    7084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 13:36:13.796877    7084 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 13:36:13.805712    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 13:36:13.833530    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 13:36:13.852941    7084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:36:13.861736    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 13:36:13.891622    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:36:13.919313    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:36:13.946089    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:36:13.975060    7084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:36:14.003428    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:36:14.031135    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:36:14.059868    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 13:36:14.087652    7084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:36:14.103302    7084 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:36:14.103302    7084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:36:14.112125    7084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:36:14.141573    7084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:36:14.174152    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:14.348528    7084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:36:14.378053    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:36:14.391873    7084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 13:36:14.414349    7084 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 13:36:14.414445    7084 command_runner.go:130] > [Unit]
	I0923 13:36:14.414445    7084 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 13:36:14.414445    7084 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 13:36:14.414445    7084 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 13:36:14.414445    7084 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 13:36:14.414445    7084 command_runner.go:130] > StartLimitBurst=3
	I0923 13:36:14.414445    7084 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 13:36:14.414445    7084 command_runner.go:130] > [Service]
	I0923 13:36:14.414445    7084 command_runner.go:130] > Type=notify
	I0923 13:36:14.414445    7084 command_runner.go:130] > Restart=on-failure
	I0923 13:36:14.414992    7084 command_runner.go:130] > Environment=NO_PROXY=172.19.156.56
	I0923 13:36:14.414992    7084 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 13:36:14.415158    7084 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 13:36:14.415158    7084 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 13:36:14.415158    7084 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 13:36:14.415158    7084 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 13:36:14.415158    7084 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 13:36:14.415158    7084 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 13:36:14.415158    7084 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 13:36:14.415158    7084 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 13:36:14.415158    7084 command_runner.go:130] > ExecStart=
	I0923 13:36:14.415158    7084 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0923 13:36:14.415158    7084 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 13:36:14.415158    7084 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 13:36:14.415698    7084 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 13:36:14.415698    7084 command_runner.go:130] > LimitNOFILE=infinity
	I0923 13:36:14.415698    7084 command_runner.go:130] > LimitNPROC=infinity
	I0923 13:36:14.415698    7084 command_runner.go:130] > LimitCORE=infinity
	I0923 13:36:14.415776    7084 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 13:36:14.416098    7084 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 13:36:14.416098    7084 command_runner.go:130] > TasksMax=infinity
	I0923 13:36:14.416098    7084 command_runner.go:130] > TimeoutStartSec=0
	I0923 13:36:14.416098    7084 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 13:36:14.416098    7084 command_runner.go:130] > Delegate=yes
	I0923 13:36:14.416098    7084 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 13:36:14.416098    7084 command_runner.go:130] > KillMode=process
	I0923 13:36:14.416098    7084 command_runner.go:130] > [Install]
	I0923 13:36:14.416098    7084 command_runner.go:130] > WantedBy=multi-user.target
	I0923 13:36:14.425001    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:36:14.451304    7084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:36:14.488332    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:36:14.520359    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:36:14.551137    7084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 13:36:14.612117    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:36:14.634311    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:36:14.664435    7084 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 13:36:14.674725    7084 ssh_runner.go:195] Run: which cri-dockerd
	I0923 13:36:14.680730    7084 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 13:36:14.687724    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 13:36:14.704598    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 13:36:14.747294    7084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 13:36:14.919247    7084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 13:36:15.088871    7084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 13:36:15.088999    7084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 13:36:15.131899    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:15.309103    7084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:36:17.930753    7084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6214727s)
	I0923 13:36:17.945404    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 13:36:17.979136    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:36:18.012751    7084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 13:36:18.204263    7084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 13:36:18.405143    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:18.599304    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 13:36:18.639727    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:36:18.671787    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:18.855165    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 13:36:18.964412    7084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 13:36:18.974388    7084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 13:36:18.983388    7084 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 13:36:18.983388    7084 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:36:18.983388    7084 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0923 13:36:18.983388    7084 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 13:36:18.983388    7084 command_runner.go:130] > Access: 2024-09-23 13:36:19.092396564 +0000
	I0923 13:36:18.983388    7084 command_runner.go:130] > Modify: 2024-09-23 13:36:19.092396564 +0000
	I0923 13:36:18.983388    7084 command_runner.go:130] > Change: 2024-09-23 13:36:19.095396707 +0000
	I0923 13:36:18.983388    7084 command_runner.go:130] >  Birth: -
	I0923 13:36:18.983388    7084 start.go:563] Will wait 60s for crictl version
	I0923 13:36:18.992392    7084 ssh_runner.go:195] Run: which crictl
	I0923 13:36:18.998484    7084 command_runner.go:130] > /usr/bin/crictl
	I0923 13:36:19.006926    7084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:36:19.062185    7084 command_runner.go:130] > Version:  0.1.0
	I0923 13:36:19.062185    7084 command_runner.go:130] > RuntimeName:  docker
	I0923 13:36:19.062185    7084 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 13:36:19.062307    7084 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:36:19.062307    7084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 13:36:19.073211    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:36:19.106821    7084 command_runner.go:130] > 27.3.0
	I0923 13:36:19.115070    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:36:19.142496    7084 command_runner.go:130] > 27.3.0
	I0923 13:36:19.145526    7084 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 13:36:19.148988    7084 out.go:177]   - env NO_PROXY=172.19.156.56
	I0923 13:36:19.151055    7084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 13:36:19.154681    7084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 13:36:19.154681    7084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 13:36:19.154681    7084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 13:36:19.154681    7084 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 13:36:19.156973    7084 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 13:36:19.156973    7084 ip.go:214] interface addr: 172.19.144.1/20
	I0923 13:36:19.165053    7084 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 13:36:19.171158    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
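The `/etc/hosts` rewrite above is idempotent: it filters out any stale `host.minikube.internal` line, appends the current gateway IP, and copies the rebuilt file back over the original. A self-contained sketch of the same pattern against a stand-in file (the seed entries and `/tmp` path are illustrative):

```shell
#!/bin/sh
# Sketch of the host.minikube.internal refresh seen above.
HOSTS=/tmp/hosts.demo      # stand-in for /etc/hosts
TAB="$(printf '\t')"

# Seed the file with a stale host.minikube.internal entry.
printf '127.0.0.1\tlocalhost\n172.19.0.9\thost.minikube.internal\n' > "$HOSTS"

# Drop the stale line, append the fresh mapping, then copy the result back
# (the log uses `sudo cp` so ownership/mode of /etc/hosts are preserved).
{ grep -v "${TAB}host.minikube.internal\$" "$HOSTS"
  printf '172.19.144.1\thost.minikube.internal\n'; } > "$HOSTS.new"
cp "$HOSTS.new" "$HOSTS"

grep 'host.minikube.internal' "$HOSTS"
```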
	I0923 13:36:19.191364    7084 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:36:19.191978    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:36:19.192503    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:36:21.049563    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:21.049563    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:21.049563    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:36:21.050315    7084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300 for IP: 172.19.147.0
	I0923 13:36:21.050315    7084 certs.go:194] generating shared ca certs ...
	I0923 13:36:21.050415    7084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:36:21.050822    7084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 13:36:21.051139    7084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 13:36:21.051256    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:36:21.051469    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:36:21.051561    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:36:21.051758    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:36:21.052050    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 13:36:21.052254    7084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 13:36:21.052356    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 13:36:21.052550    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 13:36:21.052748    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 13:36:21.053043    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 13:36:21.053375    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 13:36:21.053537    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.053630    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.053729    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.053917    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:36:21.104837    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 13:36:21.152343    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:36:21.197502    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:36:21.240643    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:36:21.286631    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 13:36:21.336928    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 13:36:21.389431    7084 ssh_runner.go:195] Run: openssl version
	I0923 13:36:21.398222    7084 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:36:21.407147    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 13:36:21.434123    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.440442    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.440442    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.448873    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.456402    7084 command_runner.go:130] > 3ec20f2e
	I0923 13:36:21.465009    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:36:21.491192    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:36:21.520176    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.529893    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.529893    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.538236    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.546144    7084 command_runner.go:130] > b5213941
	I0923 13:36:21.553889    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:36:21.581219    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 13:36:21.607943    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.614438    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.614438    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.622163    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.633943    7084 command_runner.go:130] > 51391683
	I0923 13:36:21.647027    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 13:36:21.676342    7084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:36:21.683019    7084 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:36:21.683112    7084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:36:21.683300    7084 kubeadm.go:934] updating node {m02 172.19.147.0 8443 v1.31.1 docker false true} ...
	I0923 13:36:21.683516    7084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-560300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.147.0
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:36:21.691313    7084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:36:21.707891    7084 command_runner.go:130] > kubeadm
	I0923 13:36:21.707891    7084 command_runner.go:130] > kubectl
	I0923 13:36:21.707891    7084 command_runner.go:130] > kubelet
	I0923 13:36:21.707891    7084 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:36:21.716283    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0923 13:36:21.732905    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0923 13:36:21.760833    7084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:36:21.797865    7084 ssh_runner.go:195] Run: grep 172.19.156.56	control-plane.minikube.internal$ /etc/hosts
	I0923 13:36:21.803914    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.156.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:36:21.834957    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:22.024617    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:36:22.053071    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:36:22.053706    7084 start.go:317] joinCluster: &{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:36:22.053882    7084 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:36:22.053950    7084 host.go:66] Checking if "multinode-560300-m02" exists ...
	I0923 13:36:22.054448    7084 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:36:22.054840    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:36:22.055599    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:36:23.962100    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:23.962100    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:23.962100    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:36:23.962727    7084 api_server.go:166] Checking apiserver status ...
	I0923 13:36:23.971561    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:36:23.971561    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:36:25.907789    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:25.908069    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:25.908069    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:28.196478    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:36:28.197271    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:28.197502    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:36:28.317786    7084 command_runner.go:130] > 1960
	I0923 13:36:28.317871    7084 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.3460173s)
	I0923 13:36:28.329761    7084 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup
	W0923 13:36:28.347670    7084 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 13:36:28.357817    7084 ssh_runner.go:195] Run: ls
	I0923 13:36:28.364704    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:36:28.372709    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 200:
	ok
	I0923 13:36:28.380991    7084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-560300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0923 13:36:28.544540    7084 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-qg99z, kube-system/kube-proxy-g5t97
	I0923 13:36:31.577230    7084 command_runner.go:130] > node/multinode-560300-m02 cordoned
	I0923 13:36:31.577230    7084 command_runner.go:130] > pod "busybox-7dff88458-h4tgf" has DeletionTimestamp older than 1 seconds, skipping
	I0923 13:36:31.577230    7084 command_runner.go:130] > node/multinode-560300-m02 drained
	I0923 13:36:31.577230    7084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-560300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1960231s)
	I0923 13:36:31.577230    7084 node.go:128] successfully drained node "multinode-560300-m02"
	I0923 13:36:31.577230    7084 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0923 13:36:31.577230    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:33.478159    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:33.478969    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:33.478969    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:35.776912    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:35.776912    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:35.777244    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:36:36.203651    7084 command_runner.go:130] ! W0923 13:36:36.414712    1626 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0923 13:36:36.387949    7084 command_runner.go:130] ! W0923 13:36:36.599117    1626 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod 702a0be4f578ab523bfb36ecbcabeeaa4f27321a3db902ef507a9b1288a59f98: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-7dff88458-h4tgf_default" network: cni config uninitialized
	I0923 13:36:36.404945    7084 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:36:36.404945    7084 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0923 13:36:36.405669    7084 command_runner.go:130] > [reset] Stopping the kubelet service
	I0923 13:36:36.405669    7084 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0923 13:36:36.405669    7084 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0923 13:36:36.405669    7084 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0923 13:36:36.405761    7084 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0923 13:36:36.405761    7084 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0923 13:36:36.405761    7084 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0923 13:36:36.405761    7084 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0923 13:36:36.405761    7084 command_runner.go:130] > to reset your system's IPVS tables.
	I0923 13:36:36.405761    7084 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0923 13:36:36.405761    7084 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0923 13:36:36.405761    7084 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.8282054s)
	I0923 13:36:36.405761    7084 node.go:155] successfully reset node "multinode-560300-m02"
	I0923 13:36:36.406567    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:36:36.407168    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:36:36.408382    7084 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 13:36:36.408382    7084 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0923 13:36:36.408382    7084 round_trippers.go:463] DELETE https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:36.408382    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:36.408382    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:36.408382    7084 round_trippers.go:473]     Content-Type: application/json
	I0923 13:36:36.408382    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:36.426908    7084 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0923 13:36:36.426908    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Audit-Id: dd630e74-552d-4ffd-92b7-e03407fd930b
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:36.426908    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:36.426908    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Content-Length: 171
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:36 GMT
	I0923 13:36:36.426908    7084 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-560300-m02","kind":"nodes","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d"}}
	I0923 13:36:36.426908    7084 node.go:180] successfully deleted node "multinode-560300-m02"
	I0923 13:36:36.426908    7084 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:36:36.426908    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 13:36:36.426908    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:36:38.258534    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:38.258534    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:38.258854    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:40.468700    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:36:40.468700    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:40.469544    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:36:40.640050    7084 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 70r3de.qyuhbp2j0cw96rtj --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
	I0923 13:36:40.640050    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2128576s)
	I0923 13:36:40.640376    7084 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:36:40.640376    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 70r3de.qyuhbp2j0cw96rtj --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m02"
	I0923 13:36:40.705551    7084 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:36:40.875996    7084 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0923 13:36:40.876082    7084 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0923 13:36:40.939063    7084 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:36:40.939063    7084 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:36:40.939385    7084 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 13:36:41.142606    7084 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:36:42.144353    7084 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001330567s
	I0923 13:36:42.144493    7084 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0923 13:36:42.178663    7084 command_runner.go:130] > This node has joined the cluster:
	I0923 13:36:42.178743    7084 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0923 13:36:42.178743    7084 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0923 13:36:42.178743    7084 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0923 13:36:42.182137    7084 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:36:42.182254    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 70r3de.qyuhbp2j0cw96rtj --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m02": (1.5417002s)
	I0923 13:36:42.182254    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 13:36:42.542705    7084 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0923 13:36:42.556805    7084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-560300-m02 minikube.k8s.io/updated_at=2024_09_23T13_36_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=multinode-560300 minikube.k8s.io/primary=false
	I0923 13:36:42.682325    7084 command_runner.go:130] > node/multinode-560300-m02 labeled
	I0923 13:36:42.682325    7084 start.go:319] duration metric: took 20.6272268s to joinCluster
	I0923 13:36:42.682325    7084 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:36:42.683714    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:36:42.685993    7084 out.go:177] * Verifying Kubernetes components...
	I0923 13:36:42.700468    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:42.919635    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:36:42.951820    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:36:42.952508    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:36:42.953056    7084 node_ready.go:35] waiting up to 6m0s for node "multinode-560300-m02" to be "Ready" ...
	I0923 13:36:42.953056    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:42.953056    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:42.953056    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:42.953056    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:42.956583    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:42.956924    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:42.956924    7084 round_trippers.go:580]     Audit-Id: 0e115d29-08f7-48c4-8646-d4b996ded1be
	I0923 13:36:42.956924    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:42.956924    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:42.957006    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:42.957006    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:42.957006    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:43 GMT
	I0923 13:36:42.957164    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:43.453639    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:43.453639    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:43.453639    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:43.453639    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:43.458112    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:43.458112    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:43.458112    7084 round_trippers.go:580]     Audit-Id: d1c64bbc-36b7-4837-b06b-7ef6dcb16309
	I0923 13:36:43.458112    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:43.458112    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:43.458112    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:43.458112    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:43.458112    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:43 GMT
	I0923 13:36:43.458112    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:43.954233    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:43.954233    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:43.954233    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:43.954233    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:43.957698    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:43.957786    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:43.957786    7084 round_trippers.go:580]     Audit-Id: 1caa83bf-e563-4abe-bd84-41b50a64a2b0
	I0923 13:36:43.957786    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:43.957786    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:43.957786    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:43.957786    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:43.957786    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:44 GMT
	I0923 13:36:43.957944    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:44.453811    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:44.453811    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:44.453811    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:44.453811    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:44.457868    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:44.457868    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:44.457983    7084 round_trippers.go:580]     Audit-Id: 1f456e7e-02f7-4ba6-9fa4-1e24ea21ae0b
	I0923 13:36:44.457983    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:44.457983    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:44.457983    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:44.457983    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:44.457983    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:44 GMT
	I0923 13:36:44.458141    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:44.953590    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:44.953590    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:44.953590    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:44.953590    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:44.957452    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:44.957452    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:44.957452    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:45 GMT
	I0923 13:36:44.957452    7084 round_trippers.go:580]     Audit-Id: 71961eab-5dc1-4f00-91bc-efa8dcbb10ca
	I0923 13:36:44.957452    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:44.957452    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:44.957452    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:44.957452    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:44.957452    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:44.958239    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:45.453311    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:45.453311    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:45.453311    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:45.453311    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:45.457106    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:45.457106    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:45.457106    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:45 GMT
	I0923 13:36:45.457106    7084 round_trippers.go:580]     Audit-Id: 752ffd32-7ecf-48ed-9a1a-00235ff1999a
	I0923 13:36:45.457106    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:45.457106    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:45.457106    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:45.457106    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:45.457319    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:45.954151    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:45.954151    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:45.954151    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:45.954151    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:45.958377    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:45.958377    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:45.958377    7084 round_trippers.go:580]     Audit-Id: 5104288e-110a-42a7-b126-18ea340a3e42
	I0923 13:36:45.958377    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:45.958377    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:45.958377    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:45.958377    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:45.958377    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:46 GMT
	I0923 13:36:45.958569    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:46.453895    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:46.453895    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:46.453895    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:46.453895    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:46.456867    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:46.456867    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:46.456867    7084 round_trippers.go:580]     Audit-Id: 29c24c5d-39a1-462a-a58e-5dffcdb7b97a
	I0923 13:36:46.456941    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:46.456941    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:46.456941    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:46.456941    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:46.456941    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:46 GMT
	I0923 13:36:46.456941    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:46.954589    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:46.954589    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:46.954589    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:46.954589    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:46.958204    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:46.958243    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:46.958243    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:46.958243    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:46.958243    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:46.958243    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:47 GMT
	I0923 13:36:46.958243    7084 round_trippers.go:580]     Audit-Id: d49f5f32-09c8-42df-a4b9-55098feff7bc
	I0923 13:36:46.958243    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:46.958243    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:47.453801    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:47.453801    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:47.453801    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:47.453801    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:47.458361    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:47.458698    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:47.458698    7084 round_trippers.go:580]     Audit-Id: 2e9e482d-4bae-43ca-be38-87bc9379904d
	I0923 13:36:47.458698    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:47.458698    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:47.458698    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:47.458698    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:47.458698    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:47 GMT
	I0923 13:36:47.458895    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:47.459257    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:47.953595    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:47.953595    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:47.953595    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:47.953595    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:47.959065    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:36:47.959065    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:47.959065    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:47.959122    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:47.959122    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:48 GMT
	I0923 13:36:47.959122    7084 round_trippers.go:580]     Audit-Id: 5c36be26-e9b5-461d-b2e2-24c73620f4d0
	I0923 13:36:47.959122    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:47.959122    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:47.959227    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:48.453780    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:48.453780    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:48.453780    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:48.453780    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:48.458381    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:48.458381    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:48.458381    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:48.458381    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:48.458381    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:48.458381    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:48 GMT
	I0923 13:36:48.458381    7084 round_trippers.go:580]     Audit-Id: 1c7bbb79-348a-4158-9192-a070c24fc6cc
	I0923 13:36:48.458381    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:48.458692    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:48.954303    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:48.954303    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:48.954303    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:48.954303    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:48.958320    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:48.958320    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:48.958320    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:48.958320    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:49 GMT
	I0923 13:36:48.958320    7084 round_trippers.go:580]     Audit-Id: 4defb95a-4a0d-45d6-a3c6-d94484cae8dd
	I0923 13:36:48.958320    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:48.958320    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:48.958320    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:48.958320    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:49.453822    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:49.453822    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:49.453822    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:49.453822    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:49.457701    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:49.457784    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:49.457784    7084 round_trippers.go:580]     Audit-Id: 9dc45b75-65b6-40d3-9ba3-6cc378cc0031
	I0923 13:36:49.457784    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:49.457784    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:49.457858    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:49.457858    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:49.457858    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:49 GMT
	I0923 13:36:49.457960    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:49.954478    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:49.954478    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:49.954478    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:49.954478    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:49.958687    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:49.958687    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:49.958687    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:49.958687    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:50 GMT
	I0923 13:36:49.958687    7084 round_trippers.go:580]     Audit-Id: ec51b95a-be03-4a5e-8872-f4e3a6a37ce8
	I0923 13:36:49.958687    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:49.958687    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:49.958687    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:49.958687    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:49.959548    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:50.453911    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:50.453911    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:50.453911    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:50.453911    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:50.457919    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:50.458214    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:50.458275    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:50.458275    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:50.458275    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:50.458275    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:50.458275    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:50 GMT
	I0923 13:36:50.458275    7084 round_trippers.go:580]     Audit-Id: 29b4b7f1-fa89-4ae3-a304-fa67673f026a
	I0923 13:36:50.458412    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:50.955158    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:50.955158    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:50.955158    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:50.955158    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:50.958507    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:50.958583    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:50.958648    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:50.958648    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:50.958672    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:51 GMT
	I0923 13:36:50.958672    7084 round_trippers.go:580]     Audit-Id: ae2e7d6a-f2a9-4ca8-8331-4ceeaa528a7a
	I0923 13:36:50.958672    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:50.958672    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:50.958795    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:51.454346    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:51.454346    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:51.454346    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:51.454346    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:51.458974    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:51.458974    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:51.458974    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:51.458974    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:51.458974    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:51.458974    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:51.458974    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:51 GMT
	I0923 13:36:51.459099    7084 round_trippers.go:580]     Audit-Id: e3c76321-bb07-46a8-9c44-15d3d1eb222f
	I0923 13:36:51.459594    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:51.955218    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:51.955218    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:51.955218    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:51.955218    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:51.957546    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:51.958505    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:51.958505    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:51.958505    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:51.958505    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:52 GMT
	I0923 13:36:51.958505    7084 round_trippers.go:580]     Audit-Id: fa71ed6f-0274-48e0-bb1c-e847d04ed3c8
	I0923 13:36:51.958505    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:51.958505    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:51.958607    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:52.454641    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:52.454641    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:52.454641    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:52.454641    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:52.459781    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:36:52.459895    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:52.459895    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:52 GMT
	I0923 13:36:52.459895    7084 round_trippers.go:580]     Audit-Id: 68cff358-6dc9-49e8-a18b-0c7f581ca79f
	I0923 13:36:52.459895    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:52.459895    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:52.459895    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:52.459895    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:52.460052    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:52.460154    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:52.954540    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:52.954540    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:52.954540    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:52.954540    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:52.958387    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:52.958387    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:52.958387    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:52.958387    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:52.958387    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:53 GMT
	I0923 13:36:52.958387    7084 round_trippers.go:580]     Audit-Id: 4ea2b207-d49f-4ae6-98b7-31819d70f76f
	I0923 13:36:52.958387    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:52.958387    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:52.958917    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:53.454665    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:53.454665    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:53.454665    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:53.454665    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:53.458593    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:53.458593    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:53.458593    7084 round_trippers.go:580]     Audit-Id: 7ece5a8c-b93d-441b-bb64-140a2aab3421
	I0923 13:36:53.458696    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:53.458696    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:53.458696    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:53.458696    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:53.458696    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:53 GMT
	I0923 13:36:53.458834    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:53.954823    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:53.954823    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:53.954823    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:53.954823    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:53.958679    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:53.958679    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:53.958679    7084 round_trippers.go:580]     Audit-Id: 581adf1d-fb67-404b-80ab-f5b5d5245800
	I0923 13:36:53.958679    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:53.958679    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:53.958679    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:53.958679    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:53.958679    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:54 GMT
	I0923 13:36:53.958679    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:54.455207    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:54.455207    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:54.455207    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:54.455207    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:54.459396    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:54.459396    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:54.459396    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:54.459396    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:54 GMT
	I0923 13:36:54.459396    7084 round_trippers.go:580]     Audit-Id: c543700e-f752-4757-a270-9f6ed98efb4c
	I0923 13:36:54.459590    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:54.459590    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:54.459590    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:54.459797    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:54.460296    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:54.954116    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:54.954116    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:54.954116    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:54.954716    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:54.958038    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:54.958038    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:54.958125    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:54.958125    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:54.958125    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:54.958125    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:55 GMT
	I0923 13:36:54.958186    7084 round_trippers.go:580]     Audit-Id: af4c111c-5449-448c-bc2a-7839b743148d
	I0923 13:36:54.958186    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:54.958504    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:55.454788    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:55.454788    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:55.454788    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:55.454788    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:55.460057    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:36:55.460057    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:55.460143    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:55 GMT
	I0923 13:36:55.460143    7084 round_trippers.go:580]     Audit-Id: 413faf3d-678b-4919-8dec-646b5b5d93a3
	I0923 13:36:55.460143    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:55.460143    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:55.460143    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:55.460143    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:55.460286    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:55.954447    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:55.954447    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:55.954447    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:55.954447    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:55.959162    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:55.959162    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:55.959162    7084 round_trippers.go:580]     Audit-Id: 39172ad5-1dce-42cc-acb7-c1735d7beb52
	I0923 13:36:55.959162    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:55.959162    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:55.959162    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:55.959162    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:55.959162    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:56 GMT
	I0923 13:36:55.959354    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:56.455686    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:56.455686    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:56.455686    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:56.455686    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:56.460001    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:56.460001    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:56.460001    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:56 GMT
	I0923 13:36:56.460001    7084 round_trippers.go:580]     Audit-Id: 90716a7f-723d-4893-90ba-10842a20441d
	I0923 13:36:56.460001    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:56.460001    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:56.460001    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:56.460001    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:56.460001    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:56.460532    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:56.954053    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:56.954053    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:56.954053    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:56.954053    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:56.958535    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:56.958740    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:56.958740    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:56.958864    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:56.958864    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:56.958864    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:56.958864    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:57 GMT
	I0923 13:36:56.958864    7084 round_trippers.go:580]     Audit-Id: d372752f-c22b-491d-a79f-da84862073ba
	I0923 13:36:56.959151    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:57.454425    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:57.454425    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:57.454425    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:57.454425    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:57.457926    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:57.457926    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:57.458605    7084 round_trippers.go:580]     Audit-Id: cf551955-4c7a-4c6f-ba3f-74b38f64d78b
	I0923 13:36:57.458605    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:57.458605    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:57.458605    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:57.458605    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:57.458605    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:57 GMT
	I0923 13:36:57.458779    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:57.954737    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:57.954737    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:57.954737    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:57.954737    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:57.958831    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:57.958831    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:57.958831    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:57.958831    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:57.958831    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:57.958831    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:58 GMT
	I0923 13:36:57.959046    7084 round_trippers.go:580]     Audit-Id: e7295743-45c0-4181-b492-618e701d4626
	I0923 13:36:57.959046    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:57.959187    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:58.455414    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:58.455489    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:58.455489    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:58.455489    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:58.459918    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:58.459994    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:58.459994    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:58.459994    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:58.459994    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:58.459994    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:58 GMT
	I0923 13:36:58.460081    7084 round_trippers.go:580]     Audit-Id: 3729aed3-fb12-4437-8da9-530bc322500e
	I0923 13:36:58.460081    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:58.460305    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:58.460994    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:58.955035    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:58.955035    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:58.955035    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:58.955035    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:58.959117    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:58.959117    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:58.959117    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:58.959117    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:58.959117    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:58.959334    7084 round_trippers.go:580]     Audit-Id: f7f787d3-83b4-4592-a7d2-98c8750e72f9
	I0923 13:36:58.959334    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:58.959334    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:58.959526    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:59.454377    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:59.454377    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.454377    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.454377    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.458362    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:59.458362    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.458362    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.458362    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.458362    7084 round_trippers.go:580]     Audit-Id: bb1c7dc7-d8ca-40c2-a5bc-42a9c743cb0a
	I0923 13:36:59.458362    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.458362    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.458362    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.458362    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2012","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0923 13:36:59.459488    7084 node_ready.go:49] node "multinode-560300-m02" has status "Ready":"True"
	I0923 13:36:59.459576    7084 node_ready.go:38] duration metric: took 16.5053179s for node "multinode-560300-m02" to be "Ready" ...
	I0923 13:36:59.459576    7084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
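The loop above repeatedly issues `GET /api/v1/nodes/multinode-560300-m02` and inspects the returned Node until `node_ready.go` reports `"Ready":"True"`. A minimal sketch of that readiness check, assuming a Node object decoded from response bodies shaped like the ones logged here (condition fields per the Kubernetes core/v1 Node API; the trimmed sample object is hypothetical):

```python
import json

def node_is_ready(node: dict) -> bool:
    """Return True if the Node's "Ready" condition has status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    # No Ready condition reported yet: treat the node as not ready.
    return False

# Hypothetical trimmed-down Node, shaped like the response bodies above.
sample = json.loads("""
{"kind": "Node",
 "metadata": {"name": "multinode-560300-m02"},
 "status": {"conditions": [
     {"type": "MemoryPressure", "status": "False"},
     {"type": "Ready", "status": "True"}]}}
""")
print(node_is_ready(sample))  # -> True
```

The same condition-scan applies to the per-pod waits that follow, which look for a Pod's `Ready` condition instead.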
	I0923 13:36:59.459756    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:36:59.459756    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.459844    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.459844    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.464077    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:59.464077    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.464077    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.464077    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.464077    7084 round_trippers.go:580]     Audit-Id: 78b592f8-978b-4512-b295-3a9fa37787b1
	I0923 13:36:59.464077    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.464077    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.464077    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.465585    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2015"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89511 chars]
	I0923 13:36:59.471115    7084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.471115    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:36:59.471115    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.471115    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.471115    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.473671    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.473671    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.473671    7084 round_trippers.go:580]     Audit-Id: 8f475265-4aac-4bb7-b416-86d707aa0689
	I0923 13:36:59.473671    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.473671    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.473671    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.473671    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.473671    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.473671    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0923 13:36:59.474672    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:36:59.474672    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.474672    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.474672    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.477649    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.477649    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.477649    7084 round_trippers.go:580]     Audit-Id: 9e40a977-cd45-428a-9719-87e015046368
	I0923 13:36:59.477649    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.477649    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.477649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.477649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.477649    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.477858    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:36:59.478315    7084 pod_ready.go:93] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"True"
	I0923 13:36:59.478387    7084 pod_ready.go:82] duration metric: took 7.2718ms for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
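The timestamps above show the wait loop re-querying roughly every 500 ms until the check passes or the 6m0s budget runs out. A generic sketch of that poll-until-ready pattern, under the assumption that minikube's wait helpers behave like a simple deadline loop (the `wait_for` helper and its defaults are illustrative, not minikube's actual API):

```python
import time

def wait_for(check, timeout: float = 6 * 60, interval: float = 0.5) -> bool:
    """Poll check() every `interval` seconds until it returns True or
    `timeout` seconds elapse; return whether the check ever succeeded."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

With `interval=0.5` this reproduces the ~500 ms request cadence visible in the log; the fast sub-10ms pod waits that follow succeed on the first poll.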
	I0923 13:36:59.478387    7084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.478512    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:36:59.478512    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.478512    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.478512    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.480889    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.480889    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.480889    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.480889    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.480889    7084 round_trippers.go:580]     Audit-Id: 69843a5d-4ad7-4ca2-80ee-75bb64f4f1c6
	I0923 13:36:59.480968    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.480968    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.480968    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.481103    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1822","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0923 13:36:59.481589    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:36:59.481589    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.481589    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.481589    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.483888    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.483888    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.483888    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.483888    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.483888    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.483888    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.483888    7084 round_trippers.go:580]     Audit-Id: ec2df474-0040-4a8a-9edd-8cc51bcc38d0
	I0923 13:36:59.483888    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.483888    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:36:59.484523    7084 pod_ready.go:93] pod "etcd-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:36:59.484594    7084 pod_ready.go:82] duration metric: took 6.1667ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.484594    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.484712    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:36:59.484712    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.484712    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.484712    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.487169    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.487169    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.487169    7084 round_trippers.go:580]     Audit-Id: 5721af0c-b8f9-456a-9288-99902124c5de
	I0923 13:36:59.487169    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.487169    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.487169    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.487169    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.487169    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.487169    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"c88cb5c4-fe30-4354-bf55-1f281cf11190","resourceVersion":"1816","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.156.56:8443","kubernetes.io/config.hash":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.mirror":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.seen":"2024-09-23T13:34:12.942044692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0923 13:36:59.487864    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:36:59.487935    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.487935    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.487972    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.490041    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.490950    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.490950    7084 round_trippers.go:580]     Audit-Id: 46fa7b0b-bf01-4d80-a06d-8db181bc1f02
	I0923 13:36:59.490950    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.490950    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.491019    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.491019    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.491019    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.491160    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:36:59.491809    7084 pod_ready.go:93] pod "kube-apiserver-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:36:59.491881    7084 pod_ready.go:82] duration metric: took 7.2865ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.491964    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.491964    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:36:59.491964    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.491964    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.491964    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.494519    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.494519    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.494519    7084 round_trippers.go:580]     Audit-Id: 7aeb8129-1fa5-483b-9e0e-436cbe38148e
	I0923 13:36:59.494519    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.494519    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.494519    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.494519    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.494519    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.494519    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"1809","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0923 13:36:59.495515    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:36:59.495515    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.495515    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.495515    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.498649    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.498649    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.498649    7084 round_trippers.go:580]     Audit-Id: d325e9e5-f3a9-42ab-8475-c71e1d61bde5
	I0923 13:36:59.498649    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.498649    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.498649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.498649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.498649    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.498819    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:36:59.499181    7084 pod_ready.go:93] pod "kube-controller-manager-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:36:59.499181    7084 pod_ready.go:82] duration metric: took 7.2168ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.499181    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.654679    7084 request.go:632] Waited for 155.4256ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:36:59.654679    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:36:59.654679    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.654679    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.654679    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.657963    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:59.657963    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.658260    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.658260    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.658260    7084 round_trippers.go:580]     Audit-Id: 54b6ecbb-80eb-4ca0-a52c-27d0523ed777
	I0923 13:36:59.658260    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.658260    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.658260    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.658607    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dbkdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"44a5a18e-0e93-4293-8d4b-13e3ec9acfef","resourceVersion":"1660","creationTimestamp":"2024-09-23T13:20:08Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:20:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0923 13:36:59.855474    7084 request.go:632] Waited for 196.3273ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:36:59.855694    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:36:59.855694    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.855694    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.855694    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.859370    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:59.859370    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.859370    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.859370    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.859370    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.859370    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.859370    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:36:59.859370    7084 round_trippers.go:580]     Audit-Id: db6acff9-50d7-4256-a88a-190f0cde17e3
	I0923 13:36:59.859569    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"781efd95-4e81-4850-a300-9cef56c5e6d4","resourceVersion":"1852","creationTimestamp":"2024-09-23T13:30:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_30_01_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:30:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4401 chars]
	I0923 13:36:59.860021    7084 pod_ready.go:98] node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:36:59.860021    7084 pod_ready.go:82] duration metric: took 360.8151ms for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	E0923 13:36:59.860021    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:36:59.860091    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.054755    7084 request.go:632] Waited for 194.6504ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:37:00.054755    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:37:00.054755    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.054755    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.054755    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.058701    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:37:00.058701    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.058701    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.058701    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:37:00.058701    7084 round_trippers.go:580]     Audit-Id: 8084e2c0-d695-4523-b553-c56c17152654
	I0923 13:37:00.058701    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.058701    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.058701    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.058989    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5t97","generateName":"kube-proxy-","namespace":"kube-system","uid":"49d7601a-bda4-421e-bde7-acc35c157962","resourceVersion":"1982","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0923 13:37:00.254684    7084 request.go:632] Waited for 195.1237ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:37:00.254684    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:37:00.254684    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.254684    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.254684    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.258754    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:37:00.258987    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.258987    7084 round_trippers.go:580]     Audit-Id: b172aac1-9df1-4847-8d93-262f13940f95
	I0923 13:37:00.258987    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.258987    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.258987    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.258987    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.258987    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:37:00.259320    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2012","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0923 13:37:00.259320    7084 pod_ready.go:93] pod "kube-proxy-g5t97" in "kube-system" namespace has status "Ready":"True"
	I0923 13:37:00.259320    7084 pod_ready.go:82] duration metric: took 399.2022ms for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.259320    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.454884    7084 request.go:632] Waited for 194.9867ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:37:00.454884    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:37:00.454884    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.454884    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.454884    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.460096    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:37:00.460244    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.460244    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.460311    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.460311    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.460311    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.460311    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:37:00.460311    7084 round_trippers.go:580]     Audit-Id: 856191c6-b9bd-4446-a4df-e0ae34422995
	I0923 13:37:00.460922    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"1800","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0923 13:37:00.655345    7084 request.go:632] Waited for 193.48ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:37:00.655624    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:37:00.655624    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.655624    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.655624    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.659250    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:37:00.659250    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.659250    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:37:00.659250    7084 round_trippers.go:580]     Audit-Id: d8267fb9-6eed-4c10-87b0-42be2b27c4a1
	I0923 13:37:00.659250    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.659250    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.659250    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.659250    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.659827    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:37:00.660525    7084 pod_ready.go:93] pod "kube-proxy-rgmcw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:37:00.660618    7084 pod_ready.go:82] duration metric: took 401.2706ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.660618    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.855392    7084 request.go:632] Waited for 194.5968ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:37:00.855392    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:37:00.855392    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.855948    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.855948    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.859535    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:37:00.859744    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.859744    7084 round_trippers.go:580]     Audit-Id: 6b492a2c-3f2b-4c7d-b1bb-ddcc422ada3a
	I0923 13:37:00.859744    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.859744    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.859744    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.859744    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.859744    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:01 GMT
	I0923 13:37:00.859941    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"1810","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0923 13:37:01.054747    7084 request.go:632] Waited for 194.2927ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:37:01.054747    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:37:01.054747    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:01.054747    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:01.054747    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:01.058965    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:37:01.058965    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:01.059508    7084 round_trippers.go:580]     Audit-Id: e18a1540-d0c9-4090-b41f-5cbdc6ab80fd
	I0923 13:37:01.059508    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:01.059508    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:01.059508    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:01.059508    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:01.059508    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:01 GMT
	I0923 13:37:01.059710    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:37:01.060112    7084 pod_ready.go:93] pod "kube-scheduler-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:37:01.060195    7084 pod_ready.go:82] duration metric: took 399.467ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:01.060195    7084 pod_ready.go:39] duration metric: took 1.6005102s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:37:01.060195    7084 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:37:01.069491    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:37:01.094864    7084 system_svc.go:56] duration metric: took 34.6672ms WaitForService to wait for kubelet
	I0923 13:37:01.094864    7084 kubeadm.go:582] duration metric: took 18.411296s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:37:01.094864    7084 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:37:01.255537    7084 request.go:632] Waited for 160.6621ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes
	I0923 13:37:01.255537    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes
	I0923 13:37:01.255537    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:01.255537    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:01.255537    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:01.261569    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:37:01.261673    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:01.261673    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:01 GMT
	I0923 13:37:01.261673    7084 round_trippers.go:580]     Audit-Id: c944582e-c764-4693-87ac-9d216cc055d3
	I0923 13:37:01.261762    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:01.261762    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:01.261762    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:01.261762    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:01.261891    7084 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2018"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15608 chars]
	I0923 13:37:01.262542    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:37:01.262542    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:37:01.262542    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:37:01.262542    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:37:01.262542    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:37:01.262542    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:37:01.262542    7084 node_conditions.go:105] duration metric: took 167.6663ms to run NodePressure ...
	I0923 13:37:01.262542    7084 start.go:241] waiting for startup goroutines ...
	I0923 13:37:01.263063    7084 start.go:255] writing updated cluster config ...
	I0923 13:37:01.266908    7084 out.go:201] 
	I0923 13:37:01.270063    7084 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:37:01.281074    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:37:01.281074    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:37:01.287435    7084 out.go:177] * Starting "multinode-560300-m03" worker node in "multinode-560300" cluster
	I0923 13:37:01.289774    7084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:37:01.289774    7084 cache.go:56] Caching tarball of preloaded images
	I0923 13:37:01.289774    7084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 13:37:01.289774    7084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 13:37:01.289774    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:37:01.298995    7084 start.go:360] acquireMachinesLock for multinode-560300-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:37:01.299492    7084 start.go:364] duration metric: took 496.4µs to acquireMachinesLock for "multinode-560300-m03"
	I0923 13:37:01.299618    7084 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:37:01.299649    7084 fix.go:54] fixHost starting: m03
	I0923 13:37:01.299745    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:03.121132    7084 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 13:37:03.122017    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:03.122075    7084 fix.go:112] recreateIfNeeded on multinode-560300-m03: state=Stopped err=<nil>
	W0923 13:37:03.122075    7084 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:37:03.125875    7084 out.go:177] * Restarting existing hyperv VM for "multinode-560300-m03" ...
	I0923 13:37:03.127697    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-560300-m03
	I0923 13:37:05.867779    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:05.868419    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:05.868419    7084 main.go:141] libmachine: Waiting for host to start...
	I0923 13:37:05.868419    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:07.848897    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:07.848897    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:07.849040    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:10.010052    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:10.010052    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:11.010634    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:12.938704    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:12.938704    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:12.938880    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:15.131010    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:15.131010    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:16.131814    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:18.034804    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:18.035315    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:18.035367    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:20.202934    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:20.202934    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:21.203323    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:23.129870    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:23.129870    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:23.130744    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:25.305058    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:25.305344    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:26.305730    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:28.244684    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:28.244684    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:28.244748    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:30.573163    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:30.574154    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:30.576308    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:32.434487    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:32.434487    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:32.434901    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:34.660736    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:34.660901    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:34.661245    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:37:34.664535    7084 machine.go:93] provisionDockerMachine start ...
	I0923 13:37:34.664679    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:36.500120    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:36.500120    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:36.501259    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:38.714540    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:38.714589    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:38.717773    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:37:38.718495    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:37:38.718495    7084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:37:38.854847    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 13:37:38.854847    7084 buildroot.go:166] provisioning hostname "multinode-560300-m03"
	I0923 13:37:38.854847    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:40.662730    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:40.663733    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:40.663733    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:42.869224    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:42.869224    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:42.873221    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:37:42.873599    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:37:42.873669    7084 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-560300-m03 && echo "multinode-560300-m03" | sudo tee /etc/hostname
	I0923 13:37:43.039310    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-560300-m03
	
	I0923 13:37:43.039394    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:44.885728    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:44.885728    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:44.885728    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:47.096980    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:47.097067    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:47.101955    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:37:47.102865    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:37:47.102933    7084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-560300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-560300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-560300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:37:47.252655    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:37:47.252655    7084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 13:37:47.252655    7084 buildroot.go:174] setting up certificates
	I0923 13:37:47.252655    7084 provision.go:84] configureAuth start
	I0923 13:37:47.252655    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:49.078490    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:49.078848    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:49.078848    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:51.338272    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:51.338272    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:51.338932    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:53.167516    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:53.167516    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:53.168462    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:55.397846    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:55.397846    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:55.397846    7084 provision.go:143] copyHostCerts
	I0923 13:37:55.397846    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 13:37:55.397846    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 13:37:55.397846    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 13:37:55.398502    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 13:37:55.399133    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 13:37:55.399133    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 13:37:55.399133    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 13:37:55.399852    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 13:37:55.400336    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 13:37:55.400856    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 13:37:55.400856    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 13:37:55.401015    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 13:37:55.401820    7084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-560300-m03 san=[127.0.0.1 172.19.145.249 localhost minikube multinode-560300-m03]
	I0923 13:37:55.527453    7084 provision.go:177] copyRemoteCerts
	I0923 13:37:55.533864    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:37:55.533864    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:57.349394    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:57.349394    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:57.349394    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:59.533723    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:59.534366    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:59.534848    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:37:59.654320    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1191516s)
	I0923 13:37:59.654373    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 13:37:59.654735    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:37:59.698483    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 13:37:59.698669    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0923 13:37:59.739774    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 13:37:59.740337    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 13:37:59.782006    7084 provision.go:87] duration metric: took 12.5285055s to configureAuth
	I0923 13:37:59.782006    7084 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:37:59.782534    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:37:59.782630    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:01.648883    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:01.649103    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:01.649103    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:03.899457    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:03.899528    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:03.902921    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:03.903517    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:03.903517    7084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 13:38:04.052590    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 13:38:04.052643    7084 buildroot.go:70] root file system type: tmpfs
	I0923 13:38:04.052880    7084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 13:38:04.052928    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:05.888625    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:05.888625    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:05.888715    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:08.142028    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:08.142988    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:08.148270    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:08.148867    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:08.148867    7084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.156.56"
	Environment="NO_PROXY=172.19.156.56,172.19.147.0"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 13:38:08.310605    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.156.56
	Environment=NO_PROXY=172.19.156.56,172.19.147.0
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 13:38:08.310605    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:10.129666    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:10.129666    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:10.130128    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:12.317476    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:12.318234    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:12.321900    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:12.322602    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:12.322602    7084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 13:38:14.553169    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 13:38:14.553169    7084 machine.go:96] duration metric: took 39.8858618s to provisionDockerMachine
	I0923 13:38:14.553169    7084 start.go:293] postStartSetup for "multinode-560300-m03" (driver="hyperv")
	I0923 13:38:14.553169    7084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:38:14.562368    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:38:14.562368    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:16.376794    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:16.377199    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:16.377199    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:18.594118    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:18.594118    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:18.594118    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:38:18.703705    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1410577s)
	I0923 13:38:18.711993    7084 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:38:18.721226    7084 command_runner.go:130] > NAME=Buildroot
	I0923 13:38:18.721226    7084 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:38:18.721226    7084 command_runner.go:130] > ID=buildroot
	I0923 13:38:18.721226    7084 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:38:18.721226    7084 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:38:18.721226    7084 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:38:18.721226    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 13:38:18.721226    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 13:38:18.721796    7084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 13:38:18.721796    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 13:38:18.732265    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:38:18.747039    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 13:38:18.791824    7084 start.go:296] duration metric: took 4.238368s for postStartSetup
	I0923 13:38:18.791882    7084 fix.go:56] duration metric: took 1m17.4870071s for fixHost
	I0923 13:38:18.791971    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:20.600512    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:20.600512    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:20.600814    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:22.815507    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:22.816075    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:22.819176    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:22.819756    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:22.819756    7084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:38:22.952098    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727098703.161859191
	
	I0923 13:38:22.952165    7084 fix.go:216] guest clock: 1727098703.161859191
	I0923 13:38:22.952165    7084 fix.go:229] Guest: 2024-09-23 13:38:23.161859191 +0000 UTC Remote: 2024-09-23 13:38:18.7918821 +0000 UTC m=+357.767357601 (delta=4.369977091s)
	I0923 13:38:22.952239    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:24.747348    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:24.747348    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:24.747348    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:26.913173    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:26.913173    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:26.917172    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:26.918172    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:26.918172    7084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727098702
	I0923 13:38:27.077868    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 13:38:22 UTC 2024
	
	I0923 13:38:27.077868    7084 fix.go:236] clock set: Mon Sep 23 13:38:22 UTC 2024
	 (err=<nil>)
	I0923 13:38:27.077868    7084 start.go:83] releasing machines lock for "multinode-560300-m03", held for 1m25.7725906s
	I0923 13:38:27.078174    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:28.912657    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:28.913641    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:28.913704    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:31.111619    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:31.111619    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:31.113793    7084 out.go:177] * Found network options:
	I0923 13:38:31.117147    7084 out.go:177]   - NO_PROXY=172.19.156.56,172.19.147.0
	W0923 13:38:31.118828    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:38:31.118828    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:38:31.122256    7084 out.go:177]   - NO_PROXY=172.19.156.56,172.19.147.0
	W0923 13:38:31.124702    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:38:31.125669    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:38:31.126561    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:38:31.126561    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:38:31.128139    7084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 13:38:31.128139    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:31.134783    7084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:38:31.134783    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:33.030475    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:33.030475    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:33.030475    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:33.034424    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:33.034424    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:33.034424    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:35.311215    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:35.311215    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:35.312090    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:38:35.333848    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:35.334671    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:35.334714    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:38:35.414476    7084 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0923 13:38:35.414505    7084 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.2794336s)
	W0923 13:38:35.414505    7084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:38:35.423340    7084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:38:35.427558    7084 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0923 13:38:35.427975    7084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.2991279s)
	W0923 13:38:35.428006    7084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 13:38:35.456172    7084 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 13:38:35.456233    7084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 13:38:35.456233    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:38:35.456410    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:38:35.485994    7084 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 13:38:35.496227    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 13:38:35.525772    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0923 13:38:35.541938    7084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 13:38:35.541993    7084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 13:38:35.545085    7084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:38:35.553979    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 13:38:35.584163    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:38:35.610591    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:38:35.636377    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:38:35.665034    7084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:38:35.692446    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:38:35.717576    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:38:35.746623    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 13:38:35.772585    7084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:38:35.791047    7084 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:38:35.791202    7084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:38:35.799442    7084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:38:35.828323    7084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:38:35.851345    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:36.050417    7084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:38:36.082211    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:38:36.093434    7084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 13:38:36.113188    7084 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 13:38:36.113226    7084 command_runner.go:130] > [Unit]
	I0923 13:38:36.113226    7084 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 13:38:36.113226    7084 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 13:38:36.113264    7084 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 13:38:36.113264    7084 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 13:38:36.113264    7084 command_runner.go:130] > StartLimitBurst=3
	I0923 13:38:36.113264    7084 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 13:38:36.113264    7084 command_runner.go:130] > [Service]
	I0923 13:38:36.113264    7084 command_runner.go:130] > Type=notify
	I0923 13:38:36.113264    7084 command_runner.go:130] > Restart=on-failure
	I0923 13:38:36.113317    7084 command_runner.go:130] > Environment=NO_PROXY=172.19.156.56
	I0923 13:38:36.113317    7084 command_runner.go:130] > Environment=NO_PROXY=172.19.156.56,172.19.147.0
	I0923 13:38:36.113317    7084 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 13:38:36.113317    7084 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 13:38:36.113375    7084 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 13:38:36.113375    7084 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 13:38:36.113375    7084 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 13:38:36.113375    7084 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 13:38:36.113438    7084 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 13:38:36.113438    7084 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 13:38:36.113438    7084 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 13:38:36.113438    7084 command_runner.go:130] > ExecStart=
	I0923 13:38:36.113438    7084 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0923 13:38:36.113438    7084 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 13:38:36.113531    7084 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 13:38:36.113531    7084 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 13:38:36.113531    7084 command_runner.go:130] > LimitNOFILE=infinity
	I0923 13:38:36.113531    7084 command_runner.go:130] > LimitNPROC=infinity
	I0923 13:38:36.113531    7084 command_runner.go:130] > LimitCORE=infinity
	I0923 13:38:36.113531    7084 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 13:38:36.113599    7084 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 13:38:36.113599    7084 command_runner.go:130] > TasksMax=infinity
	I0923 13:38:36.113599    7084 command_runner.go:130] > TimeoutStartSec=0
	I0923 13:38:36.113599    7084 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 13:38:36.113645    7084 command_runner.go:130] > Delegate=yes
	I0923 13:38:36.113645    7084 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 13:38:36.113645    7084 command_runner.go:130] > KillMode=process
	I0923 13:38:36.113645    7084 command_runner.go:130] > [Install]
	I0923 13:38:36.113645    7084 command_runner.go:130] > WantedBy=multi-user.target
	I0923 13:38:36.121945    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:38:36.151090    7084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:38:36.188835    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:38:36.222288    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:38:36.253757    7084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 13:38:36.311033    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:38:36.331924    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:38:36.362635    7084 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 13:38:36.372575    7084 ssh_runner.go:195] Run: which cri-dockerd
	I0923 13:38:36.378049    7084 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 13:38:36.386730    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 13:38:36.403592    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 13:38:36.442733    7084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 13:38:36.624828    7084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 13:38:36.797658    7084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 13:38:36.797658    7084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 13:38:36.841670    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:37.029671    7084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:38:39.631170    7084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6012424s)
	I0923 13:38:39.640913    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 13:38:39.668955    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:38:39.698916    7084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 13:38:39.873959    7084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 13:38:40.051348    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:40.248692    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 13:38:40.282026    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:38:40.312081    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:40.494059    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 13:38:40.590945    7084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 13:38:40.602990    7084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 13:38:40.615108    7084 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 13:38:40.615108    7084 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:38:40.615108    7084 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0923 13:38:40.615108    7084 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 13:38:40.615108    7084 command_runner.go:130] > Access: 2024-09-23 13:38:40.736151606 +0000
	I0923 13:38:40.615108    7084 command_runner.go:130] > Modify: 2024-09-23 13:38:40.736151606 +0000
	I0923 13:38:40.615108    7084 command_runner.go:130] > Change: 2024-09-23 13:38:40.740151448 +0000
	I0923 13:38:40.615108    7084 command_runner.go:130] >  Birth: -
	I0923 13:38:40.615108    7084 start.go:563] Will wait 60s for crictl version
	I0923 13:38:40.624166    7084 ssh_runner.go:195] Run: which crictl
	I0923 13:38:40.628868    7084 command_runner.go:130] > /usr/bin/crictl
	I0923 13:38:40.639595    7084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:38:40.684274    7084 command_runner.go:130] > Version:  0.1.0
	I0923 13:38:40.684274    7084 command_runner.go:130] > RuntimeName:  docker
	I0923 13:38:40.684274    7084 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 13:38:40.684274    7084 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:38:40.684274    7084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 13:38:40.693556    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:38:40.720059    7084 command_runner.go:130] > 27.3.0
	I0923 13:38:40.730664    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:38:40.761424    7084 command_runner.go:130] > 27.3.0
	I0923 13:38:40.767197    7084 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 13:38:40.770417    7084 out.go:177]   - env NO_PROXY=172.19.156.56
	I0923 13:38:40.773415    7084 out.go:177]   - env NO_PROXY=172.19.156.56,172.19.147.0
	I0923 13:38:40.774998    7084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 13:38:40.778975    7084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 13:38:40.778975    7084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 13:38:40.778975    7084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 13:38:40.778975    7084 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 13:38:40.781695    7084 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 13:38:40.781695    7084 ip.go:214] interface addr: 172.19.144.1/20
	I0923 13:38:40.791028    7084 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 13:38:40.798144    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:38:40.820222    7084 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:38:40.820964    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:38:40.821476    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:38:42.635928    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:42.635928    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:42.636342    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:38:42.636947    7084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300 for IP: 172.19.145.249
	I0923 13:38:42.637011    7084 certs.go:194] generating shared ca certs ...
	I0923 13:38:42.637011    7084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:38:42.637554    7084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 13:38:42.637752    7084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 13:38:42.637752    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:38:42.637752    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:38:42.638311    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:38:42.638404    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:38:42.638865    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 13:38:42.639097    7084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 13:38:42.639174    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 13:38:42.639407    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 13:38:42.639595    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 13:38:42.639781    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 13:38:42.639781    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 13:38:42.640310    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 13:38:42.640406    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 13:38:42.640557    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:42.640725    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:38:42.687509    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 13:38:42.729244    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:38:42.772828    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:38:42.815440    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 13:38:42.861558    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 13:38:42.903530    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:38:42.961912    7084 ssh_runner.go:195] Run: openssl version
	I0923 13:38:42.971947    7084 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:38:42.981406    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 13:38:43.009975    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 13:38:43.018070    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:38:43.018152    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:38:43.027949    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 13:38:43.038880    7084 command_runner.go:130] > 3ec20f2e
	I0923 13:38:43.046778    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:38:43.076351    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:38:43.111997    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:43.118533    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:43.118533    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:43.129738    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:43.139279    7084 command_runner.go:130] > b5213941
	I0923 13:38:43.147492    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:38:43.174275    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 13:38:43.203636    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 13:38:43.211564    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:38:43.211564    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:38:43.220668    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 13:38:43.229152    7084 command_runner.go:130] > 51391683
	I0923 13:38:43.237267    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 13:38:43.266182    7084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:38:43.272295    7084 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:38:43.272295    7084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:38:43.272652    7084 kubeadm.go:934] updating node {m03 172.19.145.249 0 v1.31.1  false true} ...
	I0923 13:38:43.272652    7084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-560300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.145.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:38:43.281424    7084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:38:43.299642    7084 command_runner.go:130] > kubeadm
	I0923 13:38:43.299642    7084 command_runner.go:130] > kubectl
	I0923 13:38:43.299642    7084 command_runner.go:130] > kubelet
	I0923 13:38:43.299716    7084 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:38:43.307016    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0923 13:38:43.322541    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0923 13:38:43.351644    7084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:38:43.393421    7084 ssh_runner.go:195] Run: grep 172.19.156.56	control-plane.minikube.internal$ /etc/hosts
	I0923 13:38:43.399601    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.156.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
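The hosts-file rewrite above follows a strip-then-append pattern, so repeated starts never accumulate duplicate `control-plane.minikube.internal` entries. A minimal sketch of the same pattern against scratch files in `/tmp` (paths are illustrative, not the real `/etc/hosts`):

```shell
# Sketch of minikube's hosts update: drop any stale control-plane line,
# append the current IP, then copy the temp file over the original.
hosts=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n172.19.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"
  printf '172.19.156.56\tcontrol-plane.minikube.internal\n'; } > /tmp/h.demo
cp /tmp/h.demo "$hosts"
grep control-plane.minikube.internal "$hosts"
```

After the rewrite the file carries exactly one control-plane entry, pointing at the new address.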
	I0923 13:38:43.430444    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:43.610013    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:38:43.636859    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:38:43.637688    7084 start.go:317] joinCluster: &{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:38:43.637688    7084 start.go:330] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0923 13:38:43.637688    7084 host.go:66] Checking if "multinode-560300-m03" exists ...
	I0923 13:38:43.638620    7084 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:38:43.638880    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:38:43.639592    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:38:45.523193    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:45.523193    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:45.523193    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:38:45.523805    7084 api_server.go:166] Checking apiserver status ...
	I0923 13:38:45.532625    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:38:45.533144    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:38:47.405620    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:47.405620    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:47.405761    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:49.598332    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:38:49.598332    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:49.599189    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:38:49.702991    7084 command_runner.go:130] > 1960
	I0923 13:38:49.703097    7084 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.1701901s)
	I0923 13:38:49.716052    7084 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup
	W0923 13:38:49.733761    7084 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup: Process exited with status 1
	stdout:
	
	stderr:
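The freezer lookup exiting with status 1 is expected on a cgroup v2 guest: `/proc/<pid>/cgroup` then holds a single unified `0::/...` entry with no per-controller `freezer` line, and minikube falls back to the plain `ls` probe that follows. A small sketch of the difference, using fabricated sample files:

```shell
# cgroup v1 lists one line per controller; cgroup v2 has a single 0:: line,
# so the freezer grep matches on v1 samples and fails on v2 samples.
printf '7:freezer:/kubepods/pod1\n1:cpu:/kubepods/pod1\n' > /tmp/cgroup.v1
printf '0::/system.slice/kubelet.service\n' > /tmp/cgroup.v2
grep -E '^[0-9]+:freezer:' /tmp/cgroup.v1
grep -E '^[0-9]+:freezer:' /tmp/cgroup.v2 \
  || echo 'no freezer controller: cgroup v2'
```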
	I0923 13:38:49.746078    7084 ssh_runner.go:195] Run: ls
	I0923 13:38:49.753090    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:38:49.760095    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 200:
	ok
	I0923 13:38:49.768024    7084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-560300-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0923 13:38:49.912268    7084 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-z9mrc, kube-system/kube-proxy-dbkdp
	I0923 13:38:49.914509    7084 command_runner.go:130] > node/multinode-560300-m03 cordoned
	I0923 13:38:49.915099    7084 command_runner.go:130] > node/multinode-560300-m03 drained
	I0923 13:38:49.915099    7084 node.go:128] successfully drained node "multinode-560300-m03"
	I0923 13:38:49.915207    7084 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0923 13:38:49.915207    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:51.777127    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:51.777127    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:51.777475    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:54.024513    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:54.024513    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:54.024513    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:38:54.421750    7084 command_runner.go:130] ! W0923 13:38:54.642210    1583 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0923 13:38:54.640777    7084 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Stopping the kubelet service
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0923 13:38:54.640996    7084 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0923 13:38:54.640996    7084 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0923 13:38:54.640996    7084 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0923 13:38:54.640996    7084 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0923 13:38:54.640996    7084 command_runner.go:130] > to reset your system's IPVS tables.
	I0923 13:38:54.641062    7084 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0923 13:38:54.641062    7084 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0923 13:38:54.641062    7084 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.7255353s)
	I0923 13:38:54.641062    7084 node.go:155] successfully reset node "multinode-560300-m03"
	I0923 13:38:54.642174    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:38:54.642754    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:38:54.643857    7084 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0923 13:38:54.644200    7084 round_trippers.go:463] DELETE https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:38:54.644284    7084 round_trippers.go:469] Request Headers:
	I0923 13:38:54.644284    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:38:54.644284    7084 round_trippers.go:473]     Content-Type: application/json
	I0923 13:38:54.644284    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:38:54.662355    7084 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0923 13:38:54.662355    7084 round_trippers.go:577] Response Headers:
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Audit-Id: c457f45c-7ca2-41ac-a655-3b46b43d0d24
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:38:54.662355    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:38:54.662355    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Content-Length: 171
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:38:54 GMT
	I0923 13:38:54.662355    7084 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-560300-m03","kind":"nodes","uid":"781efd95-4e81-4850-a300-9cef56c5e6d4"}}
	I0923 13:38:54.662355    7084 node.go:180] successfully deleted node "multinode-560300-m03"
	I0923 13:38:54.662355    7084 start.go:334] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0923 13:38:54.662355    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 13:38:54.662355    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:38:56.497598    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:56.498352    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:56.498467    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:58.687812    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:38:58.687812    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:58.688784    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:38:59.052349    7084 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token hf8qg0.xpq656vak932fgac --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
	I0923 13:38:59.052562    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3899105s)
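The `--discovery-token-ca-cert-hash` printed above is not a secret: per the kubeadm documentation it is the SHA-256 of the cluster CA certificate's public key (the DER-encoded Subject Public Key Info), which the joining node uses to pin the control plane. A sketch that recomputes such a hash from a throwaway self-signed CA (file paths are hypothetical):

```shell
# Generate a disposable CA, then hash its SPKI the way kubeadm does.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -subj '/CN=minikubeCA' -days 1 -out /tmp/demo-ca.crt 2>/dev/null
openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | sed 's/^.* //'
```

Run against the real `/etc/kubernetes/pki/ca.crt`, the same pipeline reproduces the `sha256:…` value embedded in the join command.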
	I0923 13:38:59.052562    7084 start.go:343] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0923 13:38:59.052562    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hf8qg0.xpq656vak932fgac --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m03"
	I0923 13:38:59.114513    7084 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:38:59.268397    7084 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0923 13:38:59.268470    7084 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0923 13:38:59.330880    7084 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:38:59.330880    7084 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:38:59.330880    7084 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 13:38:59.523901    7084 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:39:00.028729    7084 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 504.579739ms
	I0923 13:39:00.028729    7084 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0923 13:39:00.558000    7084 command_runner.go:130] > This node has joined the cluster:
	I0923 13:39:00.558035    7084 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0923 13:39:00.558035    7084 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0923 13:39:00.558035    7084 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0923 13:39:00.561267    7084 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:39:00.561752    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hf8qg0.xpq656vak932fgac --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m03": (1.5090883s)
	I0923 13:39:00.561752    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 13:39:00.750913    7084 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0923 13:39:00.935767    7084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-560300-m03 minikube.k8s.io/updated_at=2024_09_23T13_39_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=multinode-560300 minikube.k8s.io/primary=false
	I0923 13:39:01.065038    7084 command_runner.go:130] > node/multinode-560300-m03 labeled
	I0923 13:39:01.065224    7084 start.go:319] duration metric: took 17.4262999s to joinCluster
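The label values stamped above must satisfy the Kubernetes label-value rules: at most 63 characters drawn from alphanumerics, `.`, `_`, and `-`, beginning and ending with an alphanumeric, which is why `updated_at` encodes the timestamp with underscores rather than colons. A quick shell check of that rule (the `valid` helper is ad hoc):

```shell
# Validate strings against the Kubernetes label-value syntax.
valid() { printf '%s' "$1" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9._-]{0,61}[A-Za-z0-9])?$'; }
valid '2024_09_23T13_39_00_0700' && echo ok
valid 'v1.34.0' && echo ok
valid '-bad-' || echo rejected
```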
	I0923 13:39:01.065429    7084 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0923 13:39:01.065886    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:39:01.070555    7084 out.go:177] * Verifying Kubernetes components...
	I0923 13:39:01.081792    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:39:01.274835    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:39:01.300275    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:39:01.300901    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:39:01.300901    7084 node_ready.go:35] waiting up to 6m0s for node "multinode-560300-m03" to be "Ready" ...
	I0923 13:39:01.300901    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:01.300901    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:01.300901    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:01.300901    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:01.305300    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:01.305382    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:01.305382    7084 round_trippers.go:580]     Audit-Id: 6c7b89ba-77f8-4db5-bca3-cf1cac9ceb4b
	I0923 13:39:01.305382    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:01.305382    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:01.305382    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:01.305382    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:01.305382    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:01 GMT
	I0923 13:39:01.305799    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2163","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3398 chars]
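From this point the log is a readiness poll: minikube re-fetches the node object roughly every 500ms and inspects its `Ready` condition until it becomes `True` or the 6m budget expires. A generic sketch of that wait loop, with a file stub standing in for the API probe (names hypothetical):

```shell
# Poll a readiness probe every half second until it passes or a deadline hits.
rm -f /tmp/node.ready
( sleep 1; touch /tmp/node.ready ) &       # simulate the node turning Ready
tries=0
until [ -f /tmp/node.ready ]; do           # stand-in for GET /api/v1/nodes/<name>
  tries=$((tries + 1))
  [ "$tries" -ge 720 ] && { echo 'timed out waiting for Ready'; exit 1; }
  sleep 0.5                                # 720 * 0.5s = the 6m0s budget
done
echo 'node is Ready'
wait
```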
	I0923 13:39:01.801788    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:01.801788    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:01.801788    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:01.801788    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:01.806113    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:01.806113    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:01.806193    7084 round_trippers.go:580]     Audit-Id: b9da55c1-165f-4a4b-bc85-12ee303b27e3
	I0923 13:39:01.806193    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:01.806193    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:01.806265    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:01.806281    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:01.806361    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:02 GMT
	I0923 13:39:01.807222    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:02.301372    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:02.301372    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:02.301372    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:02.301372    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:02.305895    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:02.305994    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:02.305994    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:02.305994    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:02.305994    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:02 GMT
	I0923 13:39:02.305994    7084 round_trippers.go:580]     Audit-Id: 5e7b1a1a-7a85-4acc-af83-77921694dfa9
	I0923 13:39:02.305994    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:02.305994    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:02.306146    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:02.801779    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:02.801779    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:02.801779    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:02.801779    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:02.805018    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:02.805514    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:02.805514    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:03 GMT
	I0923 13:39:02.805514    7084 round_trippers.go:580]     Audit-Id: 48c50f1a-0700-44e1-8ce5-51387ef7bb82
	I0923 13:39:02.805514    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:02.805514    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:02.805514    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:02.805514    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:02.805885    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:03.301297    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:03.301297    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:03.301297    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:03.301297    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:03.311715    7084 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 13:39:03.311715    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:03.311715    7084 round_trippers.go:580]     Audit-Id: d29ebcc2-f0b0-42a5-b9ff-6f927ea0173c
	I0923 13:39:03.311715    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:03.311715    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:03.311715    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:03.311715    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:03.311715    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:03 GMT
	I0923 13:39:03.311715    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:03.312701    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:03.802425    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:03.802497    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:03.802497    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:03.802572    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:03.806131    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:03.806236    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:03.806236    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:03.806236    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:03.806236    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:03.806236    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:03.806236    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:04 GMT
	I0923 13:39:03.806236    7084 round_trippers.go:580]     Audit-Id: 905ed989-1830-4144-9816-d0753d71301a
	I0923 13:39:03.806502    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:04.302080    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:04.302080    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:04.302080    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:04.302080    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:04.305237    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:04.305237    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:04.305237    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:04.305237    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:04.305237    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:04 GMT
	I0923 13:39:04.305237    7084 round_trippers.go:580]     Audit-Id: 4d469e44-374d-43dd-843c-4d9e179a836d
	I0923 13:39:04.305237    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:04.305237    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:04.306144    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:04.801763    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:04.801763    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:04.801763    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:04.801763    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:04.806515    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:04.806515    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:04.806515    7084 round_trippers.go:580]     Audit-Id: 543e686f-a1b6-4af6-b7cf-a8708492fc2c
	I0923 13:39:04.806515    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:04.806662    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:04.806662    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:04.806662    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:04.806662    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:05 GMT
	I0923 13:39:04.806966    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:05.301915    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:05.301915    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:05.301915    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:05.301915    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:05.305320    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:05.305320    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:05.305320    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:05.305320    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:05.305320    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:05 GMT
	I0923 13:39:05.305320    7084 round_trippers.go:580]     Audit-Id: d971b20a-330b-4556-a0a8-ede400c41717
	I0923 13:39:05.305320    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:05.305320    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:05.305858    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:05.801376    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:05.801376    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:05.801376    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:05.801376    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:05.805306    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:05.805306    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:05.805306    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:05.805306    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:05.805306    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:05.805306    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:05.805306    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:06 GMT
	I0923 13:39:05.805306    7084 round_trippers.go:580]     Audit-Id: cd16fb3d-7813-4246-8f01-ba774eb50efb
	I0923 13:39:05.805659    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:05.806016    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:06.302776    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:06.302776    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:06.302776    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:06.302776    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:06.306847    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:06.306847    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:06.306847    7084 round_trippers.go:580]     Audit-Id: 2e894385-736c-4d81-aacc-f25b1356cb46
	I0923 13:39:06.306946    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:06.306946    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:06.306946    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:06.306946    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:06.306946    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:06 GMT
	I0923 13:39:06.307168    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:06.801991    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:06.801991    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:06.801991    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:06.801991    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:06.806028    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:06.806028    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:06.806170    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:06.806170    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:06.806170    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:06.806170    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:07 GMT
	I0923 13:39:06.806170    7084 round_trippers.go:580]     Audit-Id: f20adb46-f07e-4181-a4bb-af0963ed8b79
	I0923 13:39:06.806170    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:06.806369    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:07.301608    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:07.302023    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:07.302023    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:07.302023    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:07.305095    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:07.305193    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:07.305193    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:07 GMT
	I0923 13:39:07.305193    7084 round_trippers.go:580]     Audit-Id: 8c5ce651-5c0e-4cc4-aa71-cb4c99891551
	I0923 13:39:07.305193    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:07.305193    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:07.305193    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:07.305193    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:07.305323    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:07.801756    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:07.801756    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:07.801756    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:07.801756    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:07.806149    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:07.806149    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:07.806149    7084 round_trippers.go:580]     Audit-Id: 9610f864-f956-4ed7-a6a0-853cd50a1d9d
	I0923 13:39:07.806149    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:07.806149    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:07.806149    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:07.806149    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:07.806149    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:08 GMT
	I0923 13:39:07.806810    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:07.807143    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:08.301774    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:08.301774    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:08.301774    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:08.301774    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:08.307201    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:39:08.307201    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:08.307201    7084 round_trippers.go:580]     Audit-Id: e4e1c153-39f7-4f3b-ba20-786ddcd29c0f
	I0923 13:39:08.307201    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:08.307201    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:08.307201    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:08.307201    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:08.307396    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:08 GMT
	I0923 13:39:08.307547    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:08.802415    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:08.802415    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:08.802415    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:08.802415    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:08.807021    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:08.807136    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:08.807136    7084 round_trippers.go:580]     Audit-Id: 00a89292-003b-46a9-bb25-09b1d65e4635
	I0923 13:39:08.807136    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:08.807136    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:08.807136    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:08.807136    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:08.807136    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:09 GMT
	I0923 13:39:08.807345    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:09.302187    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:09.302402    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:09.302402    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:09.302402    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:09.304985    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:09.304985    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:09.304985    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:09.304985    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:09.304985    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:09.304985    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:09 GMT
	I0923 13:39:09.305854    7084 round_trippers.go:580]     Audit-Id: eaaec5c4-cac4-4af9-9311-0a6c6e7b2925
	I0923 13:39:09.305854    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:09.306089    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:09.801702    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:09.801702    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:09.801702    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:09.801702    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:09.806616    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:09.806679    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:09.806679    7084 round_trippers.go:580]     Audit-Id: cff918b5-8b0d-4484-9490-26bdb9a3c7a3
	I0923 13:39:09.806679    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:09.806679    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:09.806679    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:09.806679    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:09.806679    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:10 GMT
	I0923 13:39:09.806887    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:09.807295    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:10.302434    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:10.302434    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:10.302434    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:10.302434    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:10.306833    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:10.306867    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:10.306867    7084 round_trippers.go:580]     Audit-Id: 8e0c3172-dc67-421b-9302-aaa35642f607
	I0923 13:39:10.306867    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:10.306867    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:10.306867    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:10.306867    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:10.306867    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:10 GMT
	I0923 13:39:10.307010    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:10.801626    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:10.801626    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:10.801626    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:10.801626    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:10.806200    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:10.806200    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:10.806200    7084 round_trippers.go:580]     Audit-Id: b6789c4c-4ad9-4703-b740-4bd0300a7c3a
	I0923 13:39:10.806200    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:10.806200    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:10.806200    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:10.806200    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:10.806200    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:11 GMT
	I0923 13:39:10.806200    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:11.302500    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:11.302500    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:11.302500    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:11.302500    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:11.305869    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:11.305869    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:11.305869    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:11.305869    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:11.305869    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:11.305869    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:11 GMT
	I0923 13:39:11.305869    7084 round_trippers.go:580]     Audit-Id: 1b6fdfec-0889-44be-a3eb-eb42ddffd487
	I0923 13:39:11.305869    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:11.305869    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:11.802775    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:11.802775    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:11.802775    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:11.802775    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:11.806706    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:11.806706    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:11.806706    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:11.806706    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:11.806706    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:11.806706    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:11.806706    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:12 GMT
	I0923 13:39:11.806706    7084 round_trippers.go:580]     Audit-Id: 44ec42a2-73c4-4533-9417-76ba9fe90e18
	I0923 13:39:11.806706    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:12.302581    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:12.303191    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:12.303191    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:12.303273    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:12.309524    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:39:12.309524    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:12.309524    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:12.309524    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:12.309524    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:12 GMT
	I0923 13:39:12.309524    7084 round_trippers.go:580]     Audit-Id: 5d742447-04b1-4142-b591-c83247d021af
	I0923 13:39:12.309524    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:12.309524    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:12.309524    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:12.310264    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:12.802217    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:12.802217    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:12.802217    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:12.802217    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:12.805420    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:12.805857    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:12.805857    7084 round_trippers.go:580]     Audit-Id: 4d395a35-34ed-4653-96ad-e1c629306345
	I0923 13:39:12.805857    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:12.805857    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:12.805857    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:12.805857    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:12.805857    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:13 GMT
	I0923 13:39:12.806022    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:13.302675    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:13.302675    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:13.302675    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:13.302675    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:13.305492    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:13.305492    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:13.305492    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:13.305492    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:13.305492    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:13.305492    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:13.305492    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:13 GMT
	I0923 13:39:13.305492    7084 round_trippers.go:580]     Audit-Id: dd1d2ec0-f1d1-49f8-881c-a91cf9e0d5dc
	I0923 13:39:13.305640    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:13.802483    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:13.802483    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:13.802483    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:13.802483    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:13.806514    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:13.806514    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:13.806514    7084 round_trippers.go:580]     Audit-Id: a092f448-7eee-47a1-b62a-1bd91f1f9a2f
	I0923 13:39:13.806514    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:13.806514    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:13.806514    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:13.806514    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:13.806514    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:14 GMT
	I0923 13:39:13.806514    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:14.302494    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:14.302494    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:14.302494    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:14.302494    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:14.305585    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:14.305925    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:14.305925    7084 round_trippers.go:580]     Audit-Id: 691de75a-90da-4b1f-98f1-edf74effc4ab
	I0923 13:39:14.305995    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:14.305995    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:14.305995    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:14.305995    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:14.305995    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:14 GMT
	I0923 13:39:14.306296    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:14.802504    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:14.802504    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:14.802504    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:14.802504    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:14.806894    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:14.806894    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:14.806894    7084 round_trippers.go:580]     Audit-Id: 7f1bc79e-8ac1-4b74-8634-59962b043abc
	I0923 13:39:14.806894    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:14.806894    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:14.806894    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:14.806894    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:14.807010    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:15 GMT
	I0923 13:39:14.807157    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:14.807157    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:15.303389    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:15.303515    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.303515    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.303552    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.307839    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:15.307839    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.307839    7084 round_trippers.go:580]     Audit-Id: a92e2b6d-e33e-4f77-8a78-a2363c02d6ba
	I0923 13:39:15.307839    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.307839    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.307839    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.307839    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.307839    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:15 GMT
	I0923 13:39:15.307839    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:15.802777    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:15.802777    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.802777    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.803118    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.806455    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:15.806538    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.806538    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.806538    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.806538    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.806538    7084 round_trippers.go:580]     Audit-Id: 89772002-8cec-4c23-8e4b-55d8bbefd8e4
	I0923 13:39:15.806538    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.806538    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.806688    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2198","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3765 chars]
	I0923 13:39:15.807090    7084 node_ready.go:49] node "multinode-560300-m03" has status "Ready":"True"
	I0923 13:39:15.807172    7084 node_ready.go:38] duration metric: took 14.5052916s for node "multinode-560300-m03" to be "Ready" ...
	I0923 13:39:15.807172    7084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:39:15.807279    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:39:15.807334    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.807334    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.807334    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.812491    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:39:15.812491    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.812491    7084 round_trippers.go:580]     Audit-Id: e53f799d-d2fd-4534-a90e-d77aad9b34b9
	I0923 13:39:15.812491    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.812491    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.812491    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.812491    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.812491    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.813213    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2198"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89061 chars]
	I0923 13:39:15.820328    7084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.820506    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:39:15.820620    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.820620    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.820875    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.824923    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:15.825011    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.825011    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.825011    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.825087    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.825107    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.825107    7084 round_trippers.go:580]     Audit-Id: 1c49bccb-45db-48c2-a04f-e334daf6d282
	I0923 13:39:15.825107    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.825405    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0923 13:39:15.826244    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:15.826244    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.826244    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.826357    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.831083    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:15.831083    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.831083    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.831083    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.831083    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.831083    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.831083    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.831083    7084 round_trippers.go:580]     Audit-Id: 728cc23c-93a6-41ec-87c5-4d147551db78
	I0923 13:39:15.831785    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:15.831833    7084 pod_ready.go:93] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:15.831833    7084 pod_ready.go:82] duration metric: took 11.5037ms for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.831833    7084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.831833    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:39:15.831833    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.831833    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.831833    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.834775    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:15.834775    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.834775    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.834775    7084 round_trippers.go:580]     Audit-Id: 6e6a643e-4a1a-4abb-b2ae-17892adf9749
	I0923 13:39:15.834775    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.834775    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.834775    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.834775    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.834775    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1822","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0923 13:39:15.835776    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:15.835776    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.835776    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.835776    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.841444    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:39:15.841444    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.841444    7084 round_trippers.go:580]     Audit-Id: 3124890c-6864-41ce-8059-de10f04da53b
	I0923 13:39:15.841444    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.841444    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.841444    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.841444    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.841444    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.841444    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:15.842073    7084 pod_ready.go:93] pod "etcd-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:15.842143    7084 pod_ready.go:82] duration metric: took 10.2401ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.842143    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.842208    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:39:15.842266    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.842266    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.842266    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.844038    7084 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 13:39:15.844038    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.844038    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.844038    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.844038    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.844038    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.844038    7084 round_trippers.go:580]     Audit-Id: 00e9142a-c3ee-458b-932e-8e8130d14f2e
	I0923 13:39:15.844038    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.844038    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"c88cb5c4-fe30-4354-bf55-1f281cf11190","resourceVersion":"1816","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.156.56:8443","kubernetes.io/config.hash":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.mirror":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.seen":"2024-09-23T13:34:12.942044692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0923 13:39:15.844038    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:15.844038    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.844038    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.844038    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.848577    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:15.848645    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.848645    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.848645    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.848645    7084 round_trippers.go:580]     Audit-Id: d60d0ce8-e27d-4a4a-a0f3-a19929b45063
	I0923 13:39:15.848645    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.848645    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.848645    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.848795    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:15.849236    7084 pod_ready.go:93] pod "kube-apiserver-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:15.849236    7084 pod_ready.go:82] duration metric: took 7.0934ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.849236    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.849370    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:39:15.849370    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.849370    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.849370    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.854279    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:15.854398    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.854398    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.854398    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.854398    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.854398    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.854398    7084 round_trippers.go:580]     Audit-Id: 1c709d99-bb05-4d16-9dc9-75a89ad2ce85
	I0923 13:39:15.854398    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.854398    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"1809","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0923 13:39:15.854990    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:15.854990    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.854990    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.854990    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.857192    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:15.857192    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.857192    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.857192    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.857192    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.857192    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.857192    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.857192    7084 round_trippers.go:580]     Audit-Id: 3518da6e-cfc6-4e75-990b-5c0863cce8ee
	I0923 13:39:15.857192    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:15.857192    7084 pod_ready.go:93] pod "kube-controller-manager-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:15.857192    7084 pod_ready.go:82] duration metric: took 7.8842ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.857192    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.004737    7084 request.go:632] Waited for 147.535ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:39:16.004737    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:39:16.004737    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.004737    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.004737    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.007308    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:16.008121    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.008121    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:16.008121    7084 round_trippers.go:580]     Audit-Id: eb17622b-a2c2-4c55-a979-f780feda10c7
	I0923 13:39:16.008121    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.008121    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.008121    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.008121    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.008589    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dbkdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"44a5a18e-0e93-4293-8d4b-13e3ec9acfef","resourceVersion":"2173","creationTimestamp":"2024-09-23T13:20:08Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:20:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6208 chars]
	I0923 13:39:16.203204    7084 request.go:632] Waited for 193.6269ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:16.203668    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:16.203732    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.203803    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.203803    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.210164    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:39:16.210164    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.210164    7084 round_trippers.go:580]     Audit-Id: c10d2841-3348-4969-b637-c9bea3d265ba
	I0923 13:39:16.210164    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.210164    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.210164    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.210164    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.210164    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:16.210782    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2198","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3765 chars]
	I0923 13:39:16.210782    7084 pod_ready.go:93] pod "kube-proxy-dbkdp" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:16.210782    7084 pod_ready.go:82] duration metric: took 353.5659ms for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.211308    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.403089    7084 request.go:632] Waited for 191.6361ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:39:16.403370    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:39:16.403405    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.403405    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.403405    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.406835    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:16.406914    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.406914    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.406914    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:16.406978    7084 round_trippers.go:580]     Audit-Id: 44d20b8c-f627-4d13-aaa5-db2488b8f6e3
	I0923 13:39:16.407004    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.407004    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.407086    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.407332    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5t97","generateName":"kube-proxy-","namespace":"kube-system","uid":"49d7601a-bda4-421e-bde7-acc35c157962","resourceVersion":"1982","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0923 13:39:16.603654    7084 request.go:632] Waited for 195.6092ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:39:16.603654    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:39:16.603654    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.603654    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.603654    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.608421    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:16.608632    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.608632    7084 round_trippers.go:580]     Audit-Id: a70481b8-8669-4697-bcf2-5c0d6c5dda29
	I0923 13:39:16.608632    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.608632    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.608632    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.608632    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.608726    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:16.608897    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2019","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3812 chars]
	I0923 13:39:16.609638    7084 pod_ready.go:93] pod "kube-proxy-g5t97" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:16.609698    7084 pod_ready.go:82] duration metric: took 398.3631ms for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.609698    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.803185    7084 request.go:632] Waited for 193.3841ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:39:16.803185    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:39:16.803185    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.803185    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.803185    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.807151    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:16.807151    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.807151    7084 round_trippers.go:580]     Audit-Id: 28902986-fb2d-43ed-bcf9-f07a74a30942
	I0923 13:39:16.807151    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.807299    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.807299    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.807299    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.807299    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:16.807471    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"1800","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0923 13:39:17.003159    7084 request.go:632] Waited for 194.8437ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:17.003159    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:17.003159    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:17.003159    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:17.003159    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:17.007405    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:17.007405    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:17.007469    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:17.007469    7084 round_trippers.go:580]     Audit-Id: e549d9ed-fcb2-4313-b016-547d40f67021
	I0923 13:39:17.007469    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:17.007469    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:17.007469    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:17.007469    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:17.007684    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:17.008110    7084 pod_ready.go:93] pod "kube-proxy-rgmcw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:17.008110    7084 pod_ready.go:82] duration metric: took 398.3847ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:17.008110    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:17.203844    7084 request.go:632] Waited for 195.7204ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:39:17.203844    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:39:17.203844    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:17.203844    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:17.203844    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:17.208442    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:17.208442    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:17.208442    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:17.208442    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:17.208442    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:17.208442    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:17.208442    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:17.208442    7084 round_trippers.go:580]     Audit-Id: e1d718d8-a3f3-4254-9872-19a09c9ff30f
	I0923 13:39:17.208750    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"1810","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0923 13:39:17.403679    7084 request.go:632] Waited for 194.2833ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:17.403679    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:17.403679    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:17.403679    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:17.403679    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:17.407575    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:17.407638    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:17.407638    7084 round_trippers.go:580]     Audit-Id: e20a951f-e7f4-49aa-b191-ab8ffdb57297
	I0923 13:39:17.407638    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:17.407638    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:17.407638    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:17.407703    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:17.407703    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:17.408166    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:17.408692    7084 pod_ready.go:93] pod "kube-scheduler-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:17.408774    7084 pod_ready.go:82] duration metric: took 400.6367ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:17.408774    7084 pod_ready.go:39] duration metric: took 1.6014934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:39:17.408857    7084 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:39:17.418690    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:39:17.442609    7084 system_svc.go:56] duration metric: took 33.7494ms WaitForService to wait for kubelet
	I0923 13:39:17.442609    7084 kubeadm.go:582] duration metric: took 16.3760175s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:39:17.442609    7084 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:39:17.603223    7084 request.go:632] Waited for 160.6034ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes
	I0923 13:39:17.603223    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes
	I0923 13:39:17.603223    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:17.603223    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:17.603223    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:17.608840    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:39:17.608968    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:17.608968    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:17.609049    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:17.609049    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:17.609049    7084 round_trippers.go:580]     Audit-Id: 4024b7fe-cde6-4346-868f-42a7dcb51386
	I0923 13:39:17.609049    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:17.609049    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:17.609049    7084 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2200"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14732 chars]
	I0923 13:39:17.610334    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:39:17.610334    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:39:17.610334    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:39:17.610334    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:39:17.610334    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:39:17.610334    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:39:17.610334    7084 node_conditions.go:105] duration metric: took 167.7137ms to run NodePressure ...
	I0923 13:39:17.610334    7084 start.go:241] waiting for startup goroutines ...
	I0923 13:39:17.610334    7084 start.go:255] writing updated cluster config ...
	I0923 13:39:17.620931    7084 ssh_runner.go:195] Run: rm -f paused
	I0923 13:39:17.737351    7084 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 13:39:17.742327    7084 out.go:177] * Done! kubectl is now configured to use "multinode-560300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.186993180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.187010284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.187132416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.294876931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.295095989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.295130598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.295240826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 cri-dockerd[1353]: time="2024-09-23T13:34:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c745d868b64c4532f8ad5bdebcbcc9ee100dae012e0ca3795632542a6b06e49/resolv.conf as [nameserver 172.19.144.1]"
	Sep 23 13:34:35 multinode-560300 cri-dockerd[1353]: time="2024-09-23T13:34:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/351b966363b271c4c844f2f95f249bab933c1dd7c4da616e5cbeabc560539187/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.604933988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.605406710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.605437318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.605673079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.751684842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.751743457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.751755561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.751849885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:50 multinode-560300 dockerd[1084]: time="2024-09-23T13:34:50.674654594Z" level=info msg="ignoring event" container=865debd751d9213807787fbbbd437ea058c6838f1690b4c94703a34e6bc419bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:34:50 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:50.675320938Z" level=info msg="shim disconnected" id=865debd751d9213807787fbbbd437ea058c6838f1690b4c94703a34e6bc419bc namespace=moby
	Sep 23 13:34:50 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:50.675376342Z" level=warning msg="cleaning up after shim disconnected" id=865debd751d9213807787fbbbd437ea058c6838f1690b4c94703a34e6bc419bc namespace=moby
	Sep 23 13:34:50 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:50.675386143Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 13:35:06 multinode-560300 dockerd[1090]: time="2024-09-23T13:35:06.238139739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:35:06 multinode-560300 dockerd[1090]: time="2024-09-23T13:35:06.238221145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:35:06 multinode-560300 dockerd[1090]: time="2024-09-23T13:35:06.238235446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:35:06 multinode-560300 dockerd[1090]: time="2024-09-23T13:35:06.239178617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17566040b9804       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       2                   0ff72b1dec7fd       storage-provisioner
	1875788bf6c4f       8c811b4aec35f                                                                                         5 minutes ago       Running             busybox                   1                   351b966363b27       busybox-7dff88458-wwgwh
	609a4fd1025a6       c69fa2e9cbf5f                                                                                         5 minutes ago       Running             coredns                   1                   9c745d868b64c       coredns-7c65d6cfc9-glx94
	3f8f7c342259d       12968670680f4                                                                                         5 minutes ago       Running             kindnet-cni               1                   df461afcdc9bf       kindnet-mdnmc
	865debd751d92       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   0ff72b1dec7fd       storage-provisioner
	b35e7e3038b34       60c005f310ff3                                                                                         5 minutes ago       Running             kube-proxy                1                   bd858198f9515       kube-proxy-rgmcw
	413a6df004359       6bab7719df100                                                                                         5 minutes ago       Running             kube-apiserver            0                   78a98649ec3e5       kube-apiserver-multinode-560300
	dd2c109781ba7       2e96e5913fc06                                                                                         5 minutes ago       Running             etcd                      0                   081a66a1431bc       etcd-multinode-560300
	95c3c32cc98ce       175ffd71cce3d                                                                                         5 minutes ago       Running             kube-controller-manager   1                   6021c04207bdf       kube-controller-manager-multinode-560300
	b3f4f9c6259d7       9aa1fad941575                                                                                         5 minutes ago       Running             kube-scheduler            1                   ab97e1f22bda9       kube-scheduler-multinode-560300
	78de2657becad       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Exited              busybox                   0                   f294b19f20ba1       busybox-7dff88458-wwgwh
	648460d0f31f3       c69fa2e9cbf5f                                                                                         26 minutes ago      Exited              coredns                   0                   eb12eb8fe1eab       coredns-7c65d6cfc9-glx94
	a83589d1098af       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              26 minutes ago      Exited              kindnet-cni               0                   0f322d00a55b9       kindnet-mdnmc
	c92a84f5caf22       60c005f310ff3                                                                                         26 minutes ago      Exited              kube-proxy                0                   cf2fc1e617749       kube-proxy-rgmcw
	117d706d07d2f       9aa1fad941575                                                                                         26 minutes ago      Exited              kube-scheduler            0                   b160f7a7a5d22       kube-scheduler-multinode-560300
	03ce0954301e2       175ffd71cce3d                                                                                         26 minutes ago      Exited              kube-controller-manager   0                   67b7e79ad6b59       kube-controller-manager-multinode-560300
	
	
	==> coredns [609a4fd1025a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 84be67bfc79374dbf0f7b1050900d3b4b08d81a78db730aed13edbe839abc3cb2446f0d06c08690ac53a97ad9f5103fd82097eeb4b4696d252f023888848e6e0
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40678 - 27872 "HINFO IN 6316078708195576795.4122069032706466927. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.055088755s
	
	
	==> coredns [648460d0f31f] <==
	[INFO] 10.244.0.3:38681 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000058704s
	[INFO] 10.244.0.3:52711 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127209s
	[INFO] 10.244.0.3:54030 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000224916s
	[INFO] 10.244.0.3:55333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000045404s
	[INFO] 10.244.0.3:49850 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079305s
	[INFO] 10.244.0.3:54603 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043103s
	[INFO] 10.244.0.3:56551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014271s
	[INFO] 10.244.1.2:45863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113008s
	[INFO] 10.244.1.2:36717 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085106s
	[INFO] 10.244.1.2:43150 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082206s
	[INFO] 10.244.1.2:34236 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197714s
	[INFO] 10.244.0.3:37601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112108s
	[INFO] 10.244.0.3:60698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178513s
	[INFO] 10.244.0.3:35977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068705s
	[INFO] 10.244.0.3:54979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114608s
	[INFO] 10.244.1.2:58051 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107208s
	[INFO] 10.244.1.2:36408 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000226517s
	[INFO] 10.244.1.2:33973 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210915s
	[INFO] 10.244.1.2:45767 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104007s
	[INFO] 10.244.0.3:36090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125109s
	[INFO] 10.244.0.3:46993 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240317s
	[INFO] 10.244.0.3:40120 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087606s
	[INFO] 10.244.0.3:46564 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080205s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-560300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-560300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-560300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_12_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-560300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:39:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:39:35 +0000   Mon, 23 Sep 2024 13:12:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:39:35 +0000   Mon, 23 Sep 2024 13:12:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:39:35 +0000   Mon, 23 Sep 2024 13:12:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:39:35 +0000   Mon, 23 Sep 2024 13:34:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.156.56
	  Hostname:    multinode-560300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 62b2b5f3fb144947abe480d0f65b087c
	  System UUID:                d1328c2e-dfd4-f844-981c-cc7a85ce582e
	  Boot ID:                    6117261d-ee87-4a2f-8732-d0e777a92cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wwgwh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-glx94                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-560300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-mdnmc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-560300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-multinode-560300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-rgmcw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-560300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 5m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)      kubelet          Node multinode-560300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)      kubelet          Node multinode-560300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m (x7 over 26m)      kubelet          Node multinode-560300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m                    kubelet          Node multinode-560300 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    26m                    kubelet          Node multinode-560300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m                    kubelet          Node multinode-560300 status is now: NodeHasSufficientPID
	  Normal  Starting                 26m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                    node-controller  Node multinode-560300 event: Registered Node multinode-560300 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-560300 status is now: NodeReady
	  Normal  Starting                 5m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node multinode-560300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node multinode-560300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node multinode-560300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m14s                  node-controller  Node multinode-560300 event: Registered Node multinode-560300 in Controller
	
	
	Name:               multinode-560300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-560300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-560300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_36_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:36:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-560300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:39:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:36:59 +0000   Mon, 23 Sep 2024 13:36:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:36:59 +0000   Mon, 23 Sep 2024 13:36:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:36:59 +0000   Mon, 23 Sep 2024 13:36:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:36:59 +0000   Mon, 23 Sep 2024 13:36:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.147.0
	  Hostname:    multinode-560300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a21f0feca74349b2b9042ee744adfa2a
	  System UUID:                05b2789d-962f-ff45-a09c-66a2273cfcfc
	  Boot ID:                    911ea883-1447-4a97-be79-edd6379e1e0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9m52c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kindnet-qg99z              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-g5t97           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m51s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)      kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)      kubelet          Node multinode-560300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)      kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                    kubelet          Node multinode-560300-m02 status is now: NodeReady
	  Normal  Starting                 2m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m53s (x2 over 2m53s)  kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x2 over 2m53s)  kubelet          Node multinode-560300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x2 over 2m53s)  kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node multinode-560300-m02 event: Registered Node multinode-560300-m02 in Controller
	  Normal  NodeReady                2m36s                  kubelet          Node multinode-560300-m02 status is now: NodeReady
	
	
	Name:               multinode-560300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-560300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-560300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_39_00_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:39:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-560300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:39:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:39:15 +0000   Mon, 23 Sep 2024 13:39:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:39:15 +0000   Mon, 23 Sep 2024 13:39:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:39:15 +0000   Mon, 23 Sep 2024 13:39:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:39:15 +0000   Mon, 23 Sep 2024 13:39:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.145.249
	  Hostname:    multinode-560300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b038b1d4a3b4e89b31b9c9a93e1d345
	  System UUID:                7f6fbb79-0ae4-294d-a6df-c5e55efb7c3f
	  Boot ID:                    e065997d-a2d2-4fd8-ab40-61d4b4744fe7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-z9mrc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-dbkdp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m31s                  kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 32s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-560300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-560300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-560300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m                    kubelet          Node multinode-560300-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     9m36s (x2 over 9m36s)  kubelet          Node multinode-560300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    9m36s (x2 over 9m36s)  kubelet          Node multinode-560300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m36s (x2 over 9m36s)  kubelet          Node multinode-560300-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m36s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m20s                  kubelet          Node multinode-560300-m03 status is now: NodeReady
	  Normal  Starting                 36s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x2 over 36s)      kubelet          Node multinode-560300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x2 over 36s)      kubelet          Node multinode-560300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x2 over 36s)      kubelet          Node multinode-560300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           35s                    node-controller  Node multinode-560300-m03 event: Registered Node multinode-560300-m03 in Controller
	  Normal  NodeReady                21s                    kubelet          Node multinode-560300-m03 status is now: NodeReady
	
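The percentages in the `Allocated resources` tables above can be reproduced from each node's `Allocatable` figures. A minimal sketch using the `multinode-560300` values from the output above (the helper name and constants are illustrative, not part of the report):

```python
# Reproduce the "Allocated resources" percentages for multinode-560300.
# Allocatable values are taken from the node description above.

ALLOCATABLE_CPU_MILLI = 2 * 1000   # cpu: 2 cores -> 2000m
ALLOCATABLE_MEMORY_KI = 2164264    # memory: 2164264Ki

def pct(requested: float, allocatable: float) -> int:
    """Integer percentage, truncated the way kubectl prints it."""
    return int(requested / allocatable * 100)

# Sum of pod CPU requests: 100m+100m+100m+250m+200m+100m = 850m
cpu_requests_milli = 850
# Sum of pod memory requests: 70Mi+100Mi+50Mi = 220Mi, expressed in Ki
memory_requests_ki = 220 * 1024

print(pct(cpu_requests_milli, ALLOCATABLE_CPU_MILLI))   # -> 42, matching "cpu 850m (42%)"
print(pct(memory_requests_ki, ALLOCATABLE_MEMORY_KI))   # -> 10, matching "memory 220Mi (10%)"
```

The same arithmetic explains the per-pod columns, e.g. coredns's 70Mi request against 2164264Ki allocatable comes out at 3%.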
	
	==> dmesg <==
	[  +6.040305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.730835] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.961525] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.266991] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep23 13:33] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.151411] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[Sep23 13:34] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +0.101723] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.480456] systemd-fstab-generator[1050]: Ignoring "noauto" option for root device
	[  +0.173357] systemd-fstab-generator[1062]: Ignoring "noauto" option for root device
	[  +0.195668] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +2.913233] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.201094] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.188180] systemd-fstab-generator[1330]: Ignoring "noauto" option for root device
	[  +0.277734] systemd-fstab-generator[1345]: Ignoring "noauto" option for root device
	[  +0.814367] systemd-fstab-generator[1474]: Ignoring "noauto" option for root device
	[  +0.103991] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.133969] systemd-fstab-generator[1616]: Ignoring "noauto" option for root device
	[  +1.240735] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.793688] kauditd_printk_skb: 30 callbacks suppressed
	[  +3.237606] systemd-fstab-generator[2446]: Ignoring "noauto" option for root device
	[ +12.290864] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.436521] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [dd2c109781ba] <==
	{"level":"info","ts":"2024-09-23T13:34:14.955649Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f91e3c1fba4ebf31","local-member-id":"4a5242b58f83d2a4","added-peer-id":"4a5242b58f83d2a4","added-peer-peer-urls":["https://172.19.153.215:2380"]}
	{"level":"info","ts":"2024-09-23T13:34:14.956044Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f91e3c1fba4ebf31","local-member-id":"4a5242b58f83d2a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:34:14.956294Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:34:14.953679Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:34:14.958573Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T13:34:14.959934Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4a5242b58f83d2a4","initial-advertise-peer-urls":["https://172.19.156.56:2380"],"listen-peer-urls":["https://172.19.156.56:2380"],"advertise-client-urls":["https://172.19.156.56:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.156.56:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T13:34:14.960114Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T13:34:14.960302Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.19.156.56:2380"}
	{"level":"info","ts":"2024-09-23T13:34:14.960424Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.19.156.56:2380"}
	{"level":"info","ts":"2024-09-23T13:34:16.610857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T13:34:16.611022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T13:34:16.611068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 received MsgPreVoteResp from 4a5242b58f83d2a4 at term 2"}
	{"level":"info","ts":"2024-09-23T13:34:16.611089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T13:34:16.611154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 received MsgVoteResp from 4a5242b58f83d2a4 at term 3"}
	{"level":"info","ts":"2024-09-23T13:34:16.611193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T13:34:16.611223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4a5242b58f83d2a4 elected leader 4a5242b58f83d2a4 at term 3"}
	{"level":"info","ts":"2024-09-23T13:34:16.616033Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4a5242b58f83d2a4","local-member-attributes":"{Name:multinode-560300 ClientURLs:[https://172.19.156.56:2379]}","request-path":"/0/members/4a5242b58f83d2a4/attributes","cluster-id":"f91e3c1fba4ebf31","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T13:34:16.616040Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:34:16.616577Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:34:16.618615Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:34:16.618781Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:34:16.620595Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:34:16.620792Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:34:16.622201Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.156.56:2379"}
	{"level":"info","ts":"2024-09-23T13:34:16.622584Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:39:36 up 6 min,  0 users,  load average: 0.32, 0.30, 0.15
	Linux multinode-560300 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3f8f7c342259] <==
	I0923 13:39:01.902939       1 main.go:295] Handling node with IPs: map[172.19.156.56:{}]
	I0923 13:39:01.902978       1 main.go:299] handling current node
	I0923 13:39:01.902993       1 main.go:295] Handling node with IPs: map[172.19.147.0:{}]
	I0923 13:39:01.903000       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:39:01.903313       1 main.go:295] Handling node with IPs: map[172.19.145.249:{}]
	I0923 13:39:01.903436       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.2.0/24] 
	I0923 13:39:01.903893       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.19.145.249 Flags: [] Table: 0} 
	I0923 13:39:11.903348       1 main.go:295] Handling node with IPs: map[172.19.156.56:{}]
	I0923 13:39:11.903408       1 main.go:299] handling current node
	I0923 13:39:11.903430       1 main.go:295] Handling node with IPs: map[172.19.147.0:{}]
	I0923 13:39:11.903440       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:39:11.903965       1 main.go:295] Handling node with IPs: map[172.19.145.249:{}]
	I0923 13:39:11.904089       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.2.0/24] 
	I0923 13:39:21.902844       1 main.go:295] Handling node with IPs: map[172.19.156.56:{}]
	I0923 13:39:21.902954       1 main.go:299] handling current node
	I0923 13:39:21.902973       1 main.go:295] Handling node with IPs: map[172.19.147.0:{}]
	I0923 13:39:21.902984       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:39:21.903499       1 main.go:295] Handling node with IPs: map[172.19.145.249:{}]
	I0923 13:39:21.903720       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.2.0/24] 
	I0923 13:39:31.903794       1 main.go:295] Handling node with IPs: map[172.19.156.56:{}]
	I0923 13:39:31.904829       1 main.go:299] handling current node
	I0923 13:39:31.905264       1 main.go:295] Handling node with IPs: map[172.19.147.0:{}]
	I0923 13:39:31.905311       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:39:31.905967       1 main.go:295] Handling node with IPs: map[172.19.145.249:{}]
	I0923 13:39:31.906110       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [a83589d1098a] <==
	I0923 13:31:18.964652       1 main.go:299] handling current node
	I0923 13:31:28.967066       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:31:28.967263       1 main.go:299] handling current node
	I0923 13:31:28.967409       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:31:28.967426       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:31:28.967698       1 main.go:295] Handling node with IPs: map[172.19.154.147:{}]
	I0923 13:31:28.967797       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.3.0/24] 
	I0923 13:31:38.965072       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:31:38.965222       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:31:38.965665       1 main.go:295] Handling node with IPs: map[172.19.154.147:{}]
	I0923 13:31:38.965727       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.3.0/24] 
	I0923 13:31:38.966087       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:31:38.966355       1 main.go:299] handling current node
	I0923 13:31:48.963706       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:31:48.963819       1 main.go:299] handling current node
	I0923 13:31:48.963839       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:31:48.963847       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:31:48.964013       1 main.go:295] Handling node with IPs: map[172.19.154.147:{}]
	I0923 13:31:48.964036       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.3.0/24] 
	I0923 13:31:59.165838       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:31:59.165899       1 main.go:299] handling current node
	I0923 13:31:59.165917       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:31:59.165923       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:31:59.166052       1 main.go:295] Handling node with IPs: map[172.19.154.147:{}]
	I0923 13:31:59.166058       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [413a6df00435] <==
	I0923 13:34:18.055474       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:34:18.055655       1 policy_source.go:224] refreshing policies
	I0923 13:34:18.073669       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:34:18.090604       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 13:34:18.090954       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 13:34:18.091816       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:34:18.094579       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 13:34:18.094610       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 13:34:18.098149       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 13:34:18.098259       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 13:34:18.099602       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 13:34:18.100374       1 aggregator.go:171] initial CRD sync complete...
	I0923 13:34:18.100626       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 13:34:18.100748       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 13:34:18.100806       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:34:18.105250       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 13:34:18.895354       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0923 13:34:19.554970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.156.56]
	I0923 13:34:19.557761       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:34:19.575166       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 13:34:20.800845       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 13:34:20.998770       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 13:34:21.019786       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 13:34:21.192544       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 13:34:21.203172       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [03ce0954301e] <==
	I0923 13:29:50.397253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:29:50.417873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:29:55.019867       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:29:55.020720       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:30:00.948213       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-560300-m03\" does not exist"
	I0923 13:30:00.948785       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:30:00.978057       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-560300-m03" podCIDRs=["10.244.3.0/24"]
	I0923 13:30:00.978437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:00.978740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:01.221075       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:01.744630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:04.343091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:11.080865       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:16.211262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:16.211320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:30:16.230161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:19.317006       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:31:44.475013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:31:44.475847       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:31:44.690768       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:31:49.852885       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:31:59.793582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:31:59.825146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:31:59.880051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.219202ms"
	I0923 13:31:59.881783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.304µs"
	
	
	==> kube-controller-manager [95c3c32cc98c] <==
	I0923 13:37:05.094766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.906µs"
	I0923 13:37:05.296134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.906µs"
	I0923 13:37:05.308063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.505µs"
	I0923 13:37:06.340172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="64.107µs"
	I0923 13:37:06.374183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.905µs"
	I0923 13:37:08.347244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.575469ms"
	I0923 13:37:08.347568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="81.009µs"
	I0923 13:38:50.113500       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:38:50.137165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:38:54.896422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:38:54.898384       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:39:00.425093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:39:00.425191       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-560300-m03\" does not exist"
	I0923 13:39:00.458489       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-560300-m03" podCIDRs=["10.244.2.0/24"]
	I0923 13:39:00.458691       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:00.459149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:00.789071       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:01.293235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:01.635776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:10.563381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:15.856611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m03"
	I0923 13:39:15.857076       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:15.876115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:16.625053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:35.192518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300"
	
	
	==> kube-proxy [b35e7e3038b3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:34:21.011624       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:34:21.079597       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.156.56"]
	E0923 13:34:21.081635       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:34:21.328765       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:34:21.328818       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:34:21.328844       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:34:21.334895       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:34:21.336491       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:34:21.336556       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:34:21.339773       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:34:21.340668       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:34:21.340787       1 config.go:199] "Starting service config controller"
	I0923 13:34:21.340844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:34:21.341908       1 config.go:328] "Starting node config controller"
	I0923 13:34:21.341987       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:34:21.441988       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:34:21.442051       1 shared_informer.go:320] Caches are synced for node config
	I0923 13:34:21.442074       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c92a84f5caf2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:13:01.510581       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:13:01.528211       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.153.215"]
	E0923 13:13:01.528393       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:13:01.595991       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:13:01.596175       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:13:01.596207       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:13:01.601897       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:13:01.602395       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:13:01.602427       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:13:01.610743       1 config.go:199] "Starting service config controller"
	I0923 13:13:01.610798       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:13:01.610828       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:13:01.610834       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:13:01.612235       1 config.go:328] "Starting node config controller"
	I0923 13:13:01.612451       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:13:01.710868       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:13:01.711136       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:13:01.712783       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [117d706d07d2] <==
	E0923 13:12:52.395522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.490447       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:12:52.490806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.548160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 13:12:52.548442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.602117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 13:12:52.602162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.677098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:12:52.677310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.689862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:12:52.690136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.707741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:12:52.707845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.743202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:12:52.743233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.840286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:12:52.840633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.860952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:12:52.861450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.904935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:12:52.905322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.968156       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:12:52.968278       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 13:12:55.111169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 13:32:00.406868       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b3f4f9c6259d] <==
	I0923 13:34:16.021221       1 serving.go:386] Generated self-signed cert in-memory
	W0923 13:34:17.953141       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 13:34:17.953472       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 13:34:17.954760       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 13:34:17.954963       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 13:34:18.091227       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 13:34:18.091282       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:34:18.097212       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 13:34:18.100133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 13:34:18.100174       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 13:34:18.100217       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 13:34:18.201238       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:35:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:35:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:35:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:35:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:35:13 multinode-560300 kubelet[1623]: I0923 13:35:13.093290    1623 scope.go:117] "RemoveContainer" containerID="8ab41eeaea91bb89949d569cc393a51d4ee9ecbf8edf20a56f155faa3d280027"
	Sep 23 13:36:13 multinode-560300 kubelet[1623]: E0923 13:36:13.086653    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:36:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:36:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:36:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:36:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:37:13 multinode-560300 kubelet[1623]: E0923 13:37:13.087445    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:37:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:37:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:37:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:37:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:38:13 multinode-560300 kubelet[1623]: E0923 13:38:13.087104    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:38:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:38:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:38:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:38:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:39:13 multinode-560300 kubelet[1623]: E0923 13:39:13.085941    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:39:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:39:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:39:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:39:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-560300 -n multinode-560300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-560300 -n multinode-560300: (10.3716018s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-560300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (539.21s)

TestMultiNode/serial/DeleteNode (51.5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-560300 node delete m03: exit status 1 (20.71668s)

-- stdout --
	* Deleting node m03 from cluster multinode-560300
	* Stopping node "multinode-560300-m03"  ...

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-windows-amd64.exe -p multinode-560300 node delete m03": exit status 1
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-560300 status --alsologtostderr: context deadline exceeded (0s)
multinode_test.go:424: failed to run minikube status. args "out/minikube-windows-amd64.exe -p multinode-560300 status --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-560300 -n multinode-560300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-560300 -n multinode-560300: (10.520455s)
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 logs -n 25: (7.8939548s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                          Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | multinode-560300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | multinode-560300:/home/docker/cp-test_multinode-560300-m02_multinode-560300.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | multinode-560300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n multinode-560300 sudo cat                                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:24 UTC |
	|         | /home/docker/cp-test_multinode-560300-m02_multinode-560300.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:24 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m03:/home/docker/cp-test_multinode-560300-m02_multinode-560300-m03.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n multinode-560300-m03 sudo cat                                                                   | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | /home/docker/cp-test_multinode-560300-m02_multinode-560300-m03.txt                                                      |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp testdata\cp-test.txt                                                                                | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m03:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:25 UTC |
	|         | multinode-560300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:25 UTC | 23 Sep 24 13:26 UTC |
	|         | multinode-560300:/home/docker/cp-test_multinode-560300-m03_multinode-560300.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | multinode-560300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n multinode-560300 sudo cat                                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | /home/docker/cp-test_multinode-560300-m03_multinode-560300.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt                                                       | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | multinode-560300-m02:/home/docker/cp-test_multinode-560300-m03_multinode-560300-m02.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n                                                                                                 | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | multinode-560300-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-560300 ssh -n multinode-560300-m02 sudo cat                                                                   | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:26 UTC |
	|         | /home/docker/cp-test_multinode-560300-m03_multinode-560300-m02.txt                                                      |                  |                   |         |                     |                     |
	| node    | multinode-560300 node stop m03                                                                                          | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:26 UTC | 23 Sep 24 13:27 UTC |
	| node    | multinode-560300 node start                                                                                             | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:27 UTC | 23 Sep 24 13:30 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                              |                  |                   |         |                     |                     |
	| node    | list -p multinode-560300                                                                                                | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:30 UTC |                     |
	| stop    | -p multinode-560300                                                                                                     | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:30 UTC | 23 Sep 24 13:32 UTC |
	| start   | -p multinode-560300                                                                                                     | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:32 UTC | 23 Sep 24 13:39 UTC |
	|         | --wait=true -v=8                                                                                                        |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                       |                  |                   |         |                     |                     |
	| node    | list -p multinode-560300                                                                                                | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:39 UTC |                     |
	| node    | multinode-560300 node delete                                                                                            | multinode-560300 | minikube5\jenkins | v1.34.0 | 23 Sep 24 13:39 UTC |                     |
	|         | m03                                                                                                                     |                  |                   |         |                     |                     |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 13:32:21
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 13:32:21.077470    7084 out.go:345] Setting OutFile to fd 1800 ...
	I0923 13:32:21.120826    7084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:32:21.120826    7084 out.go:358] Setting ErrFile to fd 2004...
	I0923 13:32:21.120826    7084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:32:21.138833    7084 out.go:352] Setting JSON to false
	I0923 13:32:21.141842    7084 start.go:129] hostinfo: {"hostname":"minikube5","uptime":494317,"bootTime":1726604024,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 13:32:21.141842    7084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 13:32:21.304695    7084 out.go:177] * [multinode-560300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 13:32:21.337762    7084 notify.go:220] Checking for updates...
	I0923 13:32:21.399695    7084 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:32:21.436422    7084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 13:32:21.497690    7084 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 13:32:21.515821    7084 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 13:32:21.542784    7084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 13:32:21.549606    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:32:21.550084    7084 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 13:32:26.386396    7084 out.go:177] * Using the hyperv driver based on existing profile
	I0923 13:32:26.454832    7084 start.go:297] selected driver: hyperv
	I0923 13:32:26.455626    7084 start.go:901] validating driver "hyperv" against &{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:32:26.456121    7084 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 13:32:26.504002    7084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:32:26.504245    7084 cni.go:84] Creating CNI manager for ""
	I0923 13:32:26.504245    7084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:32:26.504245    7084 start.go:340] cluster config:
	{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.153.215 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:32:26.504785    7084 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 13:32:26.595332    7084 out.go:177] * Starting "multinode-560300" primary control-plane node in "multinode-560300" cluster
	I0923 13:32:26.603420    7084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:32:26.604162    7084 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 13:32:26.604162    7084 cache.go:56] Caching tarball of preloaded images
	I0923 13:32:26.604162    7084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 13:32:26.604699    7084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 13:32:26.604939    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:32:26.607010    7084 start.go:360] acquireMachinesLock for multinode-560300: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:32:26.607010    7084 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-560300"
	I0923 13:32:26.607010    7084 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:32:26.607668    7084 fix.go:54] fixHost starting: 
	I0923 13:32:26.607820    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:28.931008    7084 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 13:32:28.931008    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:28.931008    7084 fix.go:112] recreateIfNeeded on multinode-560300: state=Stopped err=<nil>
	W0923 13:32:28.931212    7084 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:32:28.962326    7084 out.go:177] * Restarting existing hyperv VM for "multinode-560300" ...
	I0923 13:32:29.037593    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-560300
	I0923 13:32:31.897705    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:31.897911    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:31.897911    7084 main.go:141] libmachine: Waiting for host to start...
	I0923 13:32:31.897911    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:33.829108    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:33.829291    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:33.829417    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:36.029491    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:36.029491    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:37.030143    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:38.936377    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:38.936830    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:38.936912    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:41.086494    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:41.087314    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:42.087715    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:43.988407    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:43.988407    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:43.988502    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:46.248921    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:46.248921    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:47.250009    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:49.179692    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:49.180270    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:49.180428    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:51.322828    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:32:51.322899    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:52.323949    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:54.259567    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:54.259567    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:54.259567    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:32:56.533613    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:32:56.533613    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:56.535789    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:32:58.397922    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:32:58.397922    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:32:58.398507    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:00.583635    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:00.583635    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:00.584427    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:33:00.586398    7084 machine.go:93] provisionDockerMachine start ...
	I0923 13:33:00.586581    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:02.463986    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:02.463986    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:02.464846    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:04.746586    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:04.746586    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:04.753172    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:04.753717    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:04.753818    7084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:33:04.879125    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 13:33:04.879125    7084 buildroot.go:166] provisioning hostname "multinode-560300"
	I0923 13:33:04.879125    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:06.761214    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:06.762254    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:06.762254    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:08.978693    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:08.979536    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:08.984918    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:08.985559    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:08.985559    7084 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-560300 && echo "multinode-560300" | sudo tee /etc/hostname
	I0923 13:33:09.142948    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-560300
	
	I0923 13:33:09.142948    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:11.001229    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:11.001229    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:11.001320    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:13.226165    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:13.227061    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:13.231080    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:13.231131    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:13.231131    7084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-560300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-560300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-560300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:33:13.373260    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:33:13.373260    7084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 13:33:13.373260    7084 buildroot.go:174] setting up certificates
	I0923 13:33:13.373260    7084 provision.go:84] configureAuth start
	I0923 13:33:13.373260    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:15.201988    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:15.201988    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:15.202342    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:17.402278    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:17.402278    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:17.402871    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:19.300327    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:19.300327    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:19.300327    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:21.573859    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:21.573859    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:21.574420    7084 provision.go:143] copyHostCerts
	I0923 13:33:21.574420    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 13:33:21.574420    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 13:33:21.574420    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 13:33:21.575010    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 13:33:21.576201    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 13:33:21.576262    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 13:33:21.576262    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 13:33:21.576262    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 13:33:21.576979    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 13:33:21.576979    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 13:33:21.577521    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 13:33:21.577701    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 13:33:21.578304    7084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-560300 san=[127.0.0.1 172.19.156.56 localhost minikube multinode-560300]
	I0923 13:33:21.692877    7084 provision.go:177] copyRemoteCerts
	I0923 13:33:21.702196    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:33:21.702196    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:23.560049    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:23.560049    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:23.560049    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:25.800955    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:25.800955    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:25.801923    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:33:25.914863    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2123829s)
	I0923 13:33:25.914863    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 13:33:25.916857    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:33:25.961787    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 13:33:25.962144    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0923 13:33:26.012256    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 13:33:26.012899    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:33:26.052950    7084 provision.go:87] duration metric: took 12.6788336s to configureAuth
	I0923 13:33:26.052950    7084 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:33:26.054594    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:33:26.054594    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:27.926522    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:27.926522    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:27.926827    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:30.174752    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:30.174752    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:30.178960    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:30.179481    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:30.179481    7084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 13:33:30.318782    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 13:33:30.318782    7084 buildroot.go:70] root file system type: tmpfs
	I0923 13:33:30.319162    7084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 13:33:30.319195    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:32.120519    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:32.120519    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:32.121014    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:34.386700    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:34.386700    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:34.390685    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:34.390751    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:34.390751    7084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 13:33:34.546922    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 13:33:34.547036    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:36.447946    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:36.447946    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:36.448039    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:38.660754    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:38.660754    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:38.664208    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:38.664887    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:38.664887    7084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 13:33:41.039207    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 13:33:41.039207    7084 machine.go:96] duration metric: took 40.4500041s to provisionDockerMachine
	I0923 13:33:41.039207    7084 start.go:293] postStartSetup for "multinode-560300" (driver="hyperv")
	I0923 13:33:41.039207    7084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:33:41.051200    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:33:41.051200    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:42.891677    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:42.891677    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:42.891677    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:45.148609    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:45.149694    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:45.150450    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:33:45.264997    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2135117s)
	I0923 13:33:45.275085    7084 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:33:45.284037    7084 command_runner.go:130] > NAME=Buildroot
	I0923 13:33:45.284037    7084 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:33:45.284037    7084 command_runner.go:130] > ID=buildroot
	I0923 13:33:45.284037    7084 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:33:45.284037    7084 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:33:45.284037    7084 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:33:45.284037    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 13:33:45.285024    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 13:33:45.285836    7084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 13:33:45.285881    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 13:33:45.294676    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:33:45.316241    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 13:33:45.358064    7084 start.go:296] duration metric: took 4.3185041s for postStartSetup
	I0923 13:33:45.358064    7084 fix.go:56] duration metric: took 1m18.7457376s for fixHost
	I0923 13:33:45.358209    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:47.220169    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:47.220169    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:47.220169    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:49.411952    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:49.411952    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:49.416513    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:49.417187    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:49.417187    7084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:33:49.542819    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727098429.758532825
	
	I0923 13:33:49.542863    7084 fix.go:216] guest clock: 1727098429.758532825
	I0923 13:33:49.542950    7084 fix.go:229] Guest: 2024-09-23 13:33:49.758532825 +0000 UTC Remote: 2024-09-23 13:33:45.3580642 +0000 UTC m=+84.351991701 (delta=4.400468625s)
	I0923 13:33:49.543061    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:51.404131    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:51.404131    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:51.404349    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:53.636941    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:53.636993    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:53.641109    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:33:53.641722    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.156.56 22 <nil> <nil>}
	I0923 13:33:53.641722    7084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727098429
	I0923 13:33:53.786596    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 13:33:49 UTC 2024
	
	I0923 13:33:53.786596    7084 fix.go:236] clock set: Mon Sep 23 13:33:49 UTC 2024
	 (err=<nil>)
	I0923 13:33:53.786596    7084 start.go:83] releasing machines lock for "multinode-560300", held for 1m27.1737011s
	I0923 13:33:53.787741    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:55.645266    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:55.645266    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:55.645915    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:57.882612    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:33:57.883276    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:57.887510    7084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 13:33:57.887783    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:57.897674    7084 ssh_runner.go:195] Run: cat /version.json
	I0923 13:33:57.897674    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:33:59.833368    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:59.833368    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:59.834455    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:33:59.835406    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:33:59.835579    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:33:59.835658    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:34:02.218651    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:34:02.218651    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:02.219006    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:34:02.250859    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:34:02.250859    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:02.252002    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:34:02.312549    7084 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 13:34:02.312549    7084 ssh_runner.go:235] Completed: cat /version.json: (4.4145776s)
	I0923 13:34:02.321365    7084 ssh_runner.go:195] Run: systemctl --version
	I0923 13:34:02.326814    7084 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0923 13:34:02.326923    7084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.439064s)
	W0923 13:34:02.327010    7084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 13:34:02.336422    7084 command_runner.go:130] > systemd 252 (252)
	I0923 13:34:02.336422    7084 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 13:34:02.345505    7084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:34:02.356355    7084 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 13:34:02.356462    7084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:34:02.364924    7084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:34:02.392725    7084 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 13:34:02.392725    7084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 13:34:02.392725    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:34:02.392725    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:34:02.427122    7084 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 13:34:02.438070    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0923 13:34:02.453604    7084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 13:34:02.453604    7084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 13:34:02.468493    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 13:34:02.487256    7084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:34:02.498433    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 13:34:02.525577    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:34:02.551661    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:34:02.581018    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:34:02.607714    7084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:34:02.637144    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:34:02.662769    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:34:02.691865    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 13:34:02.719612    7084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:34:02.735756    7084 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:34:02.735831    7084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:34:02.743936    7084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:34:02.772496    7084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:34:02.799629    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:02.996275    7084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:34:03.027524    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:34:03.038085    7084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 13:34:03.055051    7084 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 13:34:03.055051    7084 command_runner.go:130] > [Unit]
	I0923 13:34:03.055051    7084 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 13:34:03.055051    7084 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 13:34:03.055051    7084 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 13:34:03.055051    7084 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 13:34:03.055051    7084 command_runner.go:130] > StartLimitBurst=3
	I0923 13:34:03.055051    7084 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 13:34:03.055051    7084 command_runner.go:130] > [Service]
	I0923 13:34:03.055051    7084 command_runner.go:130] > Type=notify
	I0923 13:34:03.055051    7084 command_runner.go:130] > Restart=on-failure
	I0923 13:34:03.055051    7084 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 13:34:03.055051    7084 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 13:34:03.055051    7084 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 13:34:03.055051    7084 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 13:34:03.055051    7084 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 13:34:03.055051    7084 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 13:34:03.055051    7084 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 13:34:03.055051    7084 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 13:34:03.055051    7084 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 13:34:03.055051    7084 command_runner.go:130] > ExecStart=
	I0923 13:34:03.055051    7084 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0923 13:34:03.055051    7084 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 13:34:03.055051    7084 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 13:34:03.055051    7084 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 13:34:03.055051    7084 command_runner.go:130] > LimitNOFILE=infinity
	I0923 13:34:03.055051    7084 command_runner.go:130] > LimitNPROC=infinity
	I0923 13:34:03.055051    7084 command_runner.go:130] > LimitCORE=infinity
	I0923 13:34:03.055051    7084 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 13:34:03.055051    7084 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 13:34:03.055051    7084 command_runner.go:130] > TasksMax=infinity
	I0923 13:34:03.055051    7084 command_runner.go:130] > TimeoutStartSec=0
	I0923 13:34:03.055051    7084 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 13:34:03.055051    7084 command_runner.go:130] > Delegate=yes
	I0923 13:34:03.055051    7084 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 13:34:03.055051    7084 command_runner.go:130] > KillMode=process
	I0923 13:34:03.055051    7084 command_runner.go:130] > [Install]
	I0923 13:34:03.055051    7084 command_runner.go:130] > WantedBy=multi-user.target
	I0923 13:34:03.063456    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:34:03.094077    7084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:34:03.127259    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:34:03.158977    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:34:03.195325    7084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 13:34:03.257774    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:34:03.279520    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:34:03.314698    7084 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 13:34:03.327269    7084 ssh_runner.go:195] Run: which cri-dockerd
	I0923 13:34:03.332386    7084 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 13:34:03.342589    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 13:34:03.358220    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 13:34:03.394219    7084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 13:34:03.563399    7084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 13:34:03.729091    7084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 13:34:03.729434    7084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 13:34:03.767051    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:03.929222    7084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:34:06.586938    7084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6574266s)
	I0923 13:34:06.597697    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 13:34:06.630170    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:34:06.664893    7084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 13:34:06.871108    7084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 13:34:07.053776    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:07.237240    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 13:34:07.287988    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:34:07.320747    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:07.518626    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 13:34:07.614002    7084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 13:34:07.624818    7084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 13:34:07.632612    7084 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 13:34:07.632675    7084 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:34:07.632675    7084 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0923 13:34:07.632675    7084 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 13:34:07.632675    7084 command_runner.go:130] > Access: 2024-09-23 13:34:07.759485353 +0000
	I0923 13:34:07.632790    7084 command_runner.go:130] > Modify: 2024-09-23 13:34:07.759485353 +0000
	I0923 13:34:07.632790    7084 command_runner.go:130] > Change: 2024-09-23 13:34:07.762485770 +0000
	I0923 13:34:07.632847    7084 command_runner.go:130] >  Birth: -
	I0923 13:34:07.632847    7084 start.go:563] Will wait 60s for crictl version
	I0923 13:34:07.643244    7084 ssh_runner.go:195] Run: which crictl
	I0923 13:34:07.649449    7084 command_runner.go:130] > /usr/bin/crictl
	I0923 13:34:07.656733    7084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:34:07.702506    7084 command_runner.go:130] > Version:  0.1.0
	I0923 13:34:07.702506    7084 command_runner.go:130] > RuntimeName:  docker
	I0923 13:34:07.702506    7084 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 13:34:07.702506    7084 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:34:07.704163    7084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 13:34:07.713326    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:34:07.743233    7084 command_runner.go:130] > 27.3.0
	I0923 13:34:07.752143    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:34:07.781605    7084 command_runner.go:130] > 27.3.0
	I0923 13:34:07.785549    7084 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 13:34:07.785711    7084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 13:34:07.790401    7084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 13:34:07.791351    7084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 13:34:07.791351    7084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 13:34:07.791351    7084 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 13:34:07.793230    7084 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 13:34:07.793230    7084 ip.go:214] interface addr: 172.19.144.1/20
	I0923 13:34:07.802035    7084 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 13:34:07.807457    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:34:07.827413    7084 kubeadm.go:883] updating cluster {Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspek
tor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 13:34:07.827693    7084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:34:07.835009    7084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 13:34:07.858817    7084 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 13:34:07.859026    7084 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 13:34:07.859026    7084 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 13:34:07.859084    7084 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:34:07.859084    7084 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0923 13:34:07.859137    7084 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0923 13:34:07.859137    7084 docker.go:615] Images already preloaded, skipping extraction
	I0923 13:34:07.866287    7084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 13:34:07.888562    7084 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 13:34:07.888562    7084 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 13:34:07.888562    7084 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 13:34:07.888562    7084 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0923 13:34:07.889701    7084 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0923 13:34:07.889780    7084 cache_images.go:84] Images are preloaded, skipping loading
	I0923 13:34:07.889780    7084 kubeadm.go:934] updating node { 172.19.156.56 8443 v1.31.1 docker true true} ...
	I0923 13:34:07.890121    7084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-560300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.156.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:34:07.896568    7084 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 13:34:07.954477    7084 command_runner.go:130] > cgroupfs
	I0923 13:34:07.954740    7084 cni.go:84] Creating CNI manager for ""
	I0923 13:34:07.954826    7084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:34:07.954826    7084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 13:34:07.954826    7084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.156.56 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-560300 NodeName:multinode-560300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.156.56"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.156.56 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 13:34:07.954826    7084 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.156.56
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-560300"
	  kubeletExtraArgs:
	    node-ip: 172.19.156.56
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.156.56"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 13:34:07.966573    7084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:34:07.986005    7084 command_runner.go:130] > kubeadm
	I0923 13:34:07.986061    7084 command_runner.go:130] > kubectl
	I0923 13:34:07.986061    7084 command_runner.go:130] > kubelet
	I0923 13:34:07.986098    7084 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:34:07.997622    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 13:34:08.014362    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0923 13:34:08.045056    7084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:34:08.078517    7084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0923 13:34:08.125522    7084 ssh_runner.go:195] Run: grep 172.19.156.56	control-plane.minikube.internal$ /etc/hosts
	I0923 13:34:08.132791    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.156.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
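The `grep -v` / append idiom in the command above is how minikube rewrites a hosts entry atomically. A minimal sketch of the same pattern, run against a scratch file (`/tmp/demo_hosts` and the stale IP `1.2.3.4` are illustrative, not from this run):

```shell
# Drop any stale line for the name, append the current mapping,
# then swap the new file into place — mirrors the logged bash -c command.
HOSTS=/tmp/demo_hosts
printf '1.2.3.4\tcontrol-plane.minikube.internal\n127.0.0.1\tlocalhost\n' > "$HOSTS"
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '172.19.156.56\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

The `$'\t…$'` quoting makes grep match only a full tab-separated hosts entry for that exact name, so unrelated lines (like `localhost`) survive the filter.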
	I0923 13:34:08.162805    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:08.348528    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:34:08.374828    7084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300 for IP: 172.19.156.56
	I0923 13:34:08.374909    7084 certs.go:194] generating shared ca certs ...
	I0923 13:34:08.374979    7084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:08.375957    7084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 13:34:08.376469    7084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 13:34:08.376793    7084 certs.go:256] generating profile certs ...
	I0923 13:34:08.377535    7084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\client.key
	I0923 13:34:08.377685    7084 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.970a6c31
	I0923 13:34:08.377827    7084 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.970a6c31 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.156.56]
	I0923 13:34:08.789088    7084 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.970a6c31 ...
	I0923 13:34:08.789088    7084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.970a6c31: {Name:mk8a3149834e23c491bffc14de1904277923a2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:08.791190    7084 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.970a6c31 ...
	I0923 13:34:08.791190    7084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.970a6c31: {Name:mk5029a77e212f26c295dbd92ef64b74432c8110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:08.792674    7084 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt.970a6c31 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt
	I0923 13:34:08.804212    7084 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key.970a6c31 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key
	I0923 13:34:08.805206    7084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key
	I0923 13:34:08.805206    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:34:08.805545    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:34:08.805688    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:34:08.805688    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:34:08.805688    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 13:34:08.806377    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 13:34:08.806694    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 13:34:08.806833    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 13:34:08.807020    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 13:34:08.807399    7084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 13:34:08.807399    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 13:34:08.807738    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 13:34:08.807887    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 13:34:08.807887    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 13:34:08.808415    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 13:34:08.808558    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:08.808704    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 13:34:08.808704    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 13:34:08.809339    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:34:08.859222    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 13:34:08.902815    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:34:08.946297    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:34:08.989564    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 13:34:09.032968    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 13:34:09.077214    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 13:34:09.121073    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 13:34:09.166875    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:34:09.212415    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 13:34:09.252552    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 13:34:09.286282    7084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 13:34:09.322523    7084 ssh_runner.go:195] Run: openssl version
	I0923 13:34:09.330950    7084 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:34:09.343533    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:34:09.370858    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:09.376858    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:09.376858    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:09.385636    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:34:09.394363    7084 command_runner.go:130] > b5213941
	I0923 13:34:09.402497    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:34:09.429740    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 13:34:09.455218    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 13:34:09.463663    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:34:09.463663    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:34:09.472320    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 13:34:09.479165    7084 command_runner.go:130] > 51391683
	I0923 13:34:09.488860    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 13:34:09.514789    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 13:34:09.539787    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 13:34:09.546758    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:34:09.546758    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:34:09.554466    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 13:34:09.561466    7084 command_runner.go:130] > 3ec20f2e
	I0923 13:34:09.569467    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
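The `ln -fs … /etc/ssl/certs/<hash>.0` steps above install each CA under the name OpenSSL actually looks up: the subject-name hash plus `.0`. A sketch of the same hash-and-link scheme using a throwaway self-signed CA in a scratch directory (paths and CN are illustrative):

```shell
# Create a demo CA, compute its subject hash, and link it the way
# OpenSSL's cert directory lookup expects (<hash>.0).
mkdir -p /tmp/demo_certs
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo_certs/ca.key \
  -out /tmp/demo_certs/ca.pem -days 1 -subj "/CN=demoCA" >/dev/null 2>&1
HASH=$(openssl x509 -hash -noout -in /tmp/demo_certs/ca.pem)
ln -fs /tmp/demo_certs/ca.pem "/tmp/demo_certs/$HASH.0"
ls -l "/tmp/demo_certs/$HASH.0"
```

The eight-hex-digit values logged above (`b5213941`, `51391683`, `3ec20f2e`) are exactly this `openssl x509 -hash` output for each installed CA.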
	I0923 13:34:09.597930    7084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:34:09.604524    7084 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:34:09.604524    7084 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 13:34:09.604524    7084 command_runner.go:130] > Device: 8,1	Inode: 4194087     Links: 1
	I0923 13:34:09.604524    7084 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 13:34:09.604524    7084 command_runner.go:130] > Access: 2024-09-23 13:12:43.705183234 +0000
	I0923 13:34:09.604524    7084 command_runner.go:130] > Modify: 2024-09-23 13:12:43.705183234 +0000
	I0923 13:34:09.604524    7084 command_runner.go:130] > Change: 2024-09-23 13:12:43.705183234 +0000
	I0923 13:34:09.604524    7084 command_runner.go:130] >  Birth: 2024-09-23 13:12:43.705183234 +0000
	I0923 13:34:09.612933    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 13:34:09.621357    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.629009    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 13:34:09.637792    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.645891    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 13:34:09.655333    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.663552    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 13:34:09.672463    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.680531    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 13:34:09.689451    7084 command_runner.go:130] > Certificate will not expire
	I0923 13:34:09.697535    7084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 13:34:09.706286    7084 command_runner.go:130] > Certificate will not expire
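Each `-checkend 86400` probe above asks whether the certificate survives at least one more day; OpenSSL prints `Certificate will not expire` and exits 0 when it does. A self-contained sketch with a throwaway cert (2-day validity is an arbitrary choice for the demo):

```shell
# Generate a short-lived self-signed cert, then run the same expiry
# probe minikube uses: will it still be valid 86400s from now?
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 2 -subj "/CN=demo" >/dev/null 2>&1
openssl x509 -noout -in /tmp/demo.crt -checkend 86400
```

A non-zero exit (with `Certificate will expire`) is what triggers minikube to regenerate rather than reuse an existing cert.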
	I0923 13:34:09.706483    7084 kubeadm.go:392] StartCluster: {Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.68 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:34:09.717138    7084 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 13:34:09.749508    7084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 13:34:09.765836    7084 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0923 13:34:09.765836    7084 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0923 13:34:09.765836    7084 command_runner.go:130] > /var/lib/minikube/etcd:
	I0923 13:34:09.765836    7084 command_runner.go:130] > member
	I0923 13:34:09.765836    7084 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 13:34:09.765836    7084 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 13:34:09.773606    7084 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 13:34:09.790325    7084 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 13:34:09.791618    7084 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-560300" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:34:09.792623    7084 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-560300" cluster setting kubeconfig missing "multinode-560300" context setting]
	I0923 13:34:09.794624    7084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:09.810681    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:34:09.811285    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:34:09.812469    7084 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 13:34:09.820220    7084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 13:34:09.835787    7084 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0923 13:34:09.835787    7084 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0923 13:34:09.835787    7084 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0923 13:34:09.835787    7084 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0923 13:34:09.836739    7084 command_runner.go:130] >  kind: InitConfiguration
	I0923 13:34:09.836739    7084 command_runner.go:130] >  localAPIEndpoint:
	I0923 13:34:09.836739    7084 command_runner.go:130] > -  advertiseAddress: 172.19.153.215
	I0923 13:34:09.836739    7084 command_runner.go:130] > +  advertiseAddress: 172.19.156.56
	I0923 13:34:09.836788    7084 command_runner.go:130] >    bindPort: 8443
	I0923 13:34:09.836788    7084 command_runner.go:130] >  bootstrapTokens:
	I0923 13:34:09.836788    7084 command_runner.go:130] >    - groups:
	I0923 13:34:09.836788    7084 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0923 13:34:09.836788    7084 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0923 13:34:09.836788    7084 command_runner.go:130] >    name: "multinode-560300"
	I0923 13:34:09.836788    7084 command_runner.go:130] >    kubeletExtraArgs:
	I0923 13:34:09.836788    7084 command_runner.go:130] > -    node-ip: 172.19.153.215
	I0923 13:34:09.836788    7084 command_runner.go:130] > +    node-ip: 172.19.156.56
	I0923 13:34:09.836788    7084 command_runner.go:130] >    taints: []
	I0923 13:34:09.836893    7084 command_runner.go:130] >  ---
	I0923 13:34:09.836893    7084 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0923 13:34:09.836893    7084 command_runner.go:130] >  kind: ClusterConfiguration
	I0923 13:34:09.836893    7084 command_runner.go:130] >  apiServer:
	I0923 13:34:09.836964    7084 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.153.215"]
	I0923 13:34:09.836964    7084 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.156.56"]
	I0923 13:34:09.836964    7084 command_runner.go:130] >    extraArgs:
	I0923 13:34:09.836964    7084 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0923 13:34:09.837049    7084 command_runner.go:130] >  controllerManager:
	I0923 13:34:09.837076    7084 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.153.215
	+  advertiseAddress: 172.19.156.56
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-560300"
	   kubeletExtraArgs:
	-    node-ip: 172.19.153.215
	+    node-ip: 172.19.156.56
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.153.215"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.156.56"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0923 13:34:09.837076    7084 kubeadm.go:1160] stopping kube-system containers ...
	I0923 13:34:09.842735    7084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 13:34:09.870308    7084 command_runner.go:130] > 648460d0f31f
	I0923 13:34:09.870350    7084 command_runner.go:130] > b07ca5858154
	I0923 13:34:09.870350    7084 command_runner.go:130] > eb12eb8fe1ea
	I0923 13:34:09.870350    7084 command_runner.go:130] > 544604cdd801
	I0923 13:34:09.870350    7084 command_runner.go:130] > a83589d1098a
	I0923 13:34:09.870350    7084 command_runner.go:130] > c92a84f5caf2
	I0923 13:34:09.870350    7084 command_runner.go:130] > cf2fc1e61774
	I0923 13:34:09.870350    7084 command_runner.go:130] > 0f322d00a55b
	I0923 13:34:09.870350    7084 command_runner.go:130] > 90116ded443d
	I0923 13:34:09.870350    7084 command_runner.go:130] > 117d706d07d2
	I0923 13:34:09.870350    7084 command_runner.go:130] > 03ce0954301e
	I0923 13:34:09.870511    7084 command_runner.go:130] > 8ab41eeaea91
	I0923 13:34:09.870579    7084 command_runner.go:130] > 7c23acc78f4c
	I0923 13:34:09.870579    7084 command_runner.go:130] > 67b7e79ad6b5
	I0923 13:34:09.870579    7084 command_runner.go:130] > b160f7a7a5d2
	I0923 13:34:09.870579    7084 command_runner.go:130] > 6ef47416b046
	I0923 13:34:09.870672    7084 docker.go:483] Stopping containers: [648460d0f31f b07ca5858154 eb12eb8fe1ea 544604cdd801 a83589d1098a c92a84f5caf2 cf2fc1e61774 0f322d00a55b 90116ded443d 117d706d07d2 03ce0954301e 8ab41eeaea91 7c23acc78f4c 67b7e79ad6b5 b160f7a7a5d2 6ef47416b046]
	I0923 13:34:09.879657    7084 ssh_runner.go:195] Run: docker stop 648460d0f31f b07ca5858154 eb12eb8fe1ea 544604cdd801 a83589d1098a c92a84f5caf2 cf2fc1e61774 0f322d00a55b 90116ded443d 117d706d07d2 03ce0954301e 8ab41eeaea91 7c23acc78f4c 67b7e79ad6b5 b160f7a7a5d2 6ef47416b046
	I0923 13:34:09.905040    7084 command_runner.go:130] > 648460d0f31f
	I0923 13:34:09.905295    7084 command_runner.go:130] > b07ca5858154
	I0923 13:34:09.905295    7084 command_runner.go:130] > eb12eb8fe1ea
	I0923 13:34:09.905295    7084 command_runner.go:130] > 544604cdd801
	I0923 13:34:09.905295    7084 command_runner.go:130] > a83589d1098a
	I0923 13:34:09.905295    7084 command_runner.go:130] > c92a84f5caf2
	I0923 13:34:09.905295    7084 command_runner.go:130] > cf2fc1e61774
	I0923 13:34:09.905295    7084 command_runner.go:130] > 0f322d00a55b
	I0923 13:34:09.905295    7084 command_runner.go:130] > 90116ded443d
	I0923 13:34:09.905295    7084 command_runner.go:130] > 117d706d07d2
	I0923 13:34:09.905295    7084 command_runner.go:130] > 03ce0954301e
	I0923 13:34:09.905295    7084 command_runner.go:130] > 8ab41eeaea91
	I0923 13:34:09.905295    7084 command_runner.go:130] > 7c23acc78f4c
	I0923 13:34:09.905295    7084 command_runner.go:130] > 67b7e79ad6b5
	I0923 13:34:09.905295    7084 command_runner.go:130] > b160f7a7a5d2
	I0923 13:34:09.905295    7084 command_runner.go:130] > 6ef47416b046
	I0923 13:34:09.914434    7084 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 13:34:09.955877    7084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 13:34:09.972508    7084 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0923 13:34:09.972625    7084 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0923 13:34:09.972755    7084 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0923 13:34:09.972823    7084 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:34:09.972895    7084 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 13:34:09.972976    7084 kubeadm.go:157] found existing configuration files:
	
	I0923 13:34:09.983623    7084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 13:34:09.999345    7084 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:34:09.999345    7084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 13:34:10.007696    7084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 13:34:10.035610    7084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 13:34:10.053684    7084 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:34:10.053754    7084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 13:34:10.062195    7084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 13:34:10.086608    7084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 13:34:10.102235    7084 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:34:10.102235    7084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 13:34:10.110095    7084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 13:34:10.135038    7084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 13:34:10.150701    7084 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:34:10.150701    7084 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 13:34:10.159668    7084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 13:34:10.183290    7084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 13:34:10.199857    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:10.390619    7084 command_runner.go:130] ! W0923 13:34:10.608971    1591 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:10.391440    7084 command_runner.go:130] ! W0923 13:34:10.610043    1591 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 13:34:10.402424    7084 command_runner.go:130] > [certs] Using the existing "sa" key
	I0923 13:34:10.402424    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:10.458239    7084 command_runner.go:130] ! W0923 13:34:10.677109    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:10.459404    7084 command_runner.go:130] ! W0923 13:34:10.677844    1596 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 13:34:12.312082    7084 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 13:34:12.312082    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.9095285s)
	I0923 13:34:12.312082    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:12.368439    7084 command_runner.go:130] ! W0923 13:34:12.587045    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.369369    7084 command_runner.go:130] ! W0923 13:34:12.588099    1601 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.571209    7084 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:34:12.571302    7084 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:34:12.571379    7084 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 13:34:12.571450    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:12.634265    7084 command_runner.go:130] ! W0923 13:34:12.852327    1629 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.635062    7084 command_runner.go:130] ! W0923 13:34:12.853020    1629 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.655266    7084 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 13:34:12.655266    7084 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 13:34:12.655266    7084 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 13:34:12.655266    7084 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 13:34:12.655266    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:12.741853    7084 command_runner.go:130] ! W0923 13:34:12.960044    1636 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.742145    7084 command_runner.go:130] ! W0923 13:34:12.960940    1636 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:12.766906    7084 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 13:34:12.767012    7084 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:34:12.776277    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:13.279799    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:13.779079    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:14.278375    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:14.778571    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:14.802648    7084 command_runner.go:130] > 1960
	I0923 13:34:14.802727    7084 api_server.go:72] duration metric: took 2.0355779s to wait for apiserver process to appear ...
	I0923 13:34:14.802727    7084 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:34:14.802828    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:17.720157    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 13:34:17.720300    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 13:34:17.720300    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:17.850795    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:34:17.850795    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:34:17.850795    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:17.859222    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:34:17.859290    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:34:18.303501    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:18.312344    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:34:18.312389    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:34:18.803361    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:18.825418    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 13:34:18.825418    7084 api_server.go:103] status: https://172.19.156.56:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 13:34:19.303692    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:19.315167    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 200:
	ok
	I0923 13:34:19.316401    7084 round_trippers.go:463] GET https://172.19.156.56:8443/version
	I0923 13:34:19.316401    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:19.316401    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:19.316401    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:19.328710    7084 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0923 13:34:19.328710    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:19.328710    7084 round_trippers.go:580]     Audit-Id: f4401bf3-2600-430b-8f13-521935b5c441
	I0923 13:34:19.328710    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:19.328778    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:19.328778    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:19.328778    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:19.328778    7084 round_trippers.go:580]     Content-Length: 263
	I0923 13:34:19.328778    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:19 GMT
	I0923 13:34:19.328838    7084 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 13:34:19.328997    7084 api_server.go:141] control plane version: v1.31.1
	I0923 13:34:19.329096    7084 api_server.go:131] duration metric: took 4.5260638s to wait for apiserver health ...
	I0923 13:34:19.329096    7084 cni.go:84] Creating CNI manager for ""
	I0923 13:34:19.329096    7084 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0923 13:34:19.333287    7084 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 13:34:19.345878    7084 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 13:34:19.356777    7084 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0923 13:34:19.356899    7084 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0923 13:34:19.356899    7084 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0923 13:34:19.356899    7084 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 13:34:19.356899    7084 command_runner.go:130] > Access: 2024-09-23 13:32:56.102387400 +0000
	I0923 13:34:19.356899    7084 command_runner.go:130] > Modify: 2024-09-20 04:01:25.000000000 +0000
	I0923 13:34:19.356899    7084 command_runner.go:130] > Change: 2024-09-23 13:32:44.533000000 +0000
	I0923 13:34:19.356981    7084 command_runner.go:130] >  Birth: -
	I0923 13:34:19.357113    7084 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 13:34:19.357159    7084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 13:34:19.407720    7084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 13:34:20.589955    7084 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0923 13:34:20.590048    7084 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0923 13:34:20.590048    7084 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0923 13:34:20.590048    7084 command_runner.go:130] > daemonset.apps/kindnet configured
	I0923 13:34:20.590048    7084 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1822476s)
	I0923 13:34:20.590115    7084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:34:20.590265    7084 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 13:34:20.590265    7084 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 13:34:20.590406    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:20.590406    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:20.590406    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:20.590406    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:20.595684    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:20.595684    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:20.595684    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:20.595684    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:20 GMT
	I0923 13:34:20.595684    7084 round_trippers.go:580]     Audit-Id: 7c5555f4-a150-442e-9746-93fbae5f2377
	I0923 13:34:20.595684    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:20.595684    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:20.595684    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:20.596674    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1779"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91382 chars]
	I0923 13:34:20.602680    7084 system_pods.go:59] 12 kube-system pods found
	I0923 13:34:20.602680    7084 system_pods.go:61] "coredns-7c65d6cfc9-glx94" [f476c8f8-667a-48d4-84f8-4aa15336cea9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 13:34:20.602680    7084 system_pods.go:61] "etcd-multinode-560300" [477ee4f5-e333-4042-97cd-8457f60fd696] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kindnet-mdnmc" [ffaf3266-f3b8-424f-888b-15aff927de53] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kindnet-qg99z" [0f714fff-dd9b-4ba3-b2e9-6e9e18f21ae9] Running
	I0923 13:34:20.602680    7084 system_pods.go:61] "kindnet-z9mrc" [c9dfa12e-54ef-4d0b-825e-bcbcaa77b5a9] Running
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-apiserver-multinode-560300" [c88cb5c4-fe30-4354-bf55-1f281cf11190] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-controller-manager-multinode-560300" [aa0d358b-19fd-4553-8a34-f772ba945019] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-proxy-dbkdp" [44a5a18e-0e93-4293-8d4b-13e3ec9acfef] Running
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-proxy-g5t97" [49d7601a-bda4-421e-bde7-acc35c157962] Running
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-proxy-rgmcw" [97050e09-6fc3-4e7b-b00e-07eb9332bf15] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0923 13:34:20.602680    7084 system_pods.go:61] "kube-scheduler-multinode-560300" [01e5d6a3-2eb6-4fa4-8607-072724fb2880] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0923 13:34:20.602680    7084 system_pods.go:61] "storage-provisioner" [444d1029-f19d-4fa6-b454-c9c710e6d9b2] Running
	I0923 13:34:20.602680    7084 system_pods.go:74] duration metric: took 12.5642ms to wait for pod list to return data ...
	I0923 13:34:20.602680    7084 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:34:20.602680    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes
	I0923 13:34:20.602680    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:20.602680    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:20.602680    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:20.610725    7084 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 13:34:20.610725    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:20.610725    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:20.610725    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:20 GMT
	I0923 13:34:20.610725    7084 round_trippers.go:580]     Audit-Id: fcfb7d35-9971-4e1f-9c0e-03a15651ea9b
	I0923 13:34:20.610725    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:20.610725    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:20.610725    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:20.610725    7084 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1779"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16289 chars]
	I0923 13:34:20.611693    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:20.611693    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:20.611693    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:20.611693    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:20.611693    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:20.611693    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:20.611693    7084 node_conditions.go:105] duration metric: took 9.013ms to run NodePressure ...
	I0923 13:34:20.611693    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 13:34:20.670713    7084 command_runner.go:130] ! W0923 13:34:20.889559    2282 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:20.678678    7084 command_runner.go:130] ! W0923 13:34:20.898671    2282 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 13:34:20.997787    7084 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0923 13:34:20.997864    7084 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0923 13:34:20.997987    7084 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0923 13:34:20.998045    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0923 13:34:20.998045    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:20.998045    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:20.998045    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.012671    7084 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 13:34:21.013069    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.013141    7084 round_trippers.go:580]     Audit-Id: 893affd4-36f4-46ab-8603-701e7a588ba9
	I0923 13:34:21.013141    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.013141    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.013141    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.013206    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.013269    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.018681    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1782"},"items":[{"metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1775","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 31322 chars]
	I0923 13:34:21.021273    7084 kubeadm.go:739] kubelet initialised
	I0923 13:34:21.021333    7084 kubeadm.go:740] duration metric: took 23.3019ms waiting for restarted kubelet to initialise ...
	I0923 13:34:21.021333    7084 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:34:21.021547    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:21.021614    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.021642    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.021642    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.039229    7084 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0923 13:34:21.039698    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.039698    7084 round_trippers.go:580]     Audit-Id: 58f6b66c-77ba-474f-aacc-6d84054438d3
	I0923 13:34:21.039698    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.039698    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.039698    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.039698    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.039783    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.041330    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1782"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91382 chars]
	I0923 13:34:21.044663    7084 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.045257    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:21.045257    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.045307    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.045307    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.049912    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:21.049912    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.049912    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.049912    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.049912    7084 round_trippers.go:580]     Audit-Id: ad3bc9e9-21b6-4469-aaf9-2a8956d5985e
	I0923 13:34:21.049912    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.049912    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.049912    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.049912    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:21.050914    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:21.050914    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.050914    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.050914    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.055920    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:21.055920    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.056830    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.056830    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.056830    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.056830    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.056830    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.056830    7084 round_trippers.go:580]     Audit-Id: e350e5a2-6bde-4eb9-9cff-d6ded1f94674
	I0923 13:34:21.057146    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0923 13:34:21.057608    7084 pod_ready.go:98] node "multinode-560300" hosting pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.057667    7084 pod_ready.go:82] duration metric: took 13.0035ms for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.057667    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.057667    7084 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.057790    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:34:21.057790    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.057790    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.057790    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.060470    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:21.061255    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.061255    7084 round_trippers.go:580]     Audit-Id: 877d5fc1-3787-4e99-b107-8133e04979ea
	I0923 13:34:21.061255    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.061255    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.061255    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.061255    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.061255    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.061364    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1775","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6830 chars]
	I0923 13:34:21.062017    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:21.062081    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.062081    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.062081    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.064461    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:21.064461    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.064461    7084 round_trippers.go:580]     Audit-Id: d0e5fd5b-ab0a-4729-b4e7-e69a10b6923a
	I0923 13:34:21.064461    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.064461    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.064461    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.064461    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.064461    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.065307    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0923 13:34:21.065748    7084 pod_ready.go:98] node "multinode-560300" hosting pod "etcd-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.065748    7084 pod_ready.go:82] duration metric: took 8.0801ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.065748    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "etcd-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.065748    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.065926    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:34:21.065926    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.066104    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.066104    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.068445    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:21.068445    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.068445    7084 round_trippers.go:580]     Audit-Id: 4e5f9635-0f25-43c8-966b-7b5a2969e11e
	I0923 13:34:21.068445    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.068445    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.068445    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.068445    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.068445    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.069225    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"c88cb5c4-fe30-4354-bf55-1f281cf11190","resourceVersion":"1776","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.156.56:8443","kubernetes.io/config.hash":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.mirror":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.seen":"2024-09-23T13:34:12.942044692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8283 chars]
	I0923 13:34:21.069716    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:21.069779    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.069779    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.069779    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.076082    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:34:21.076172    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.076172    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.076172    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.076172    7084 round_trippers.go:580]     Audit-Id: 1255e212-e963-4415-b94d-4512ffb7dc44
	I0923 13:34:21.076172    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.076172    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.076172    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.076383    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0923 13:34:21.076778    7084 pod_ready.go:98] node "multinode-560300" hosting pod "kube-apiserver-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.076848    7084 pod_ready.go:82] duration metric: took 11.0278ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.076848    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "kube-apiserver-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.076848    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.076977    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:34:21.076977    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.076977    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.076977    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.085480    7084 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 13:34:21.085480    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.085480    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.085480    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.086482    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.086482    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.086506    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.086506    7084 round_trippers.go:580]     Audit-Id: 68f027ce-5b89-4a3e-a19c-f1bb9577d529
	I0923 13:34:21.086770    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"1748","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0923 13:34:21.087335    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:21.087398    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.087398    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.087398    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.089477    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:21.089477    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.089477    7084 round_trippers.go:580]     Audit-Id: 10226438-f03f-49df-ba75-cc0f2be1bbfa
	I0923 13:34:21.089477    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.089477    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.089477    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.089477    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.089477    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.090461    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1701","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5364 chars]
	I0923 13:34:21.090461    7084 pod_ready.go:98] node "multinode-560300" hosting pod "kube-controller-manager-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.090461    7084 pod_ready.go:82] duration metric: took 13.6119ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.090461    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "kube-controller-manager-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:21.090461    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.198560    7084 request.go:632] Waited for 108.0917ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:34:21.198560    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:34:21.198560    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.198560    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.198560    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.202471    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:21.202471    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.202471    7084 round_trippers.go:580]     Audit-Id: 2144e5db-0a95-46ba-8dfb-ef817d4b8680
	I0923 13:34:21.202471    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.202471    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.202471    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.202471    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.202471    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.202471    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dbkdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"44a5a18e-0e93-4293-8d4b-13e3ec9acfef","resourceVersion":"1660","creationTimestamp":"2024-09-23T13:20:08Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:20:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0923 13:34:21.398638    7084 request.go:632] Waited for 195.5607ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:34:21.398638    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:34:21.398638    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.398638    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.398638    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.401652    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:21.402035    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.402064    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.402064    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.402064    7084 round_trippers.go:580]     Audit-Id: 3305564d-0a54-49d5-b3ed-f3a6c11f843e
	I0923 13:34:21.402064    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.402064    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.402064    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.402321    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"781efd95-4e81-4850-a300-9cef56c5e6d4","resourceVersion":"1786","creationTimestamp":"2024-09-23T13:30:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_30_01_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:30:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4304 chars]
	I0923 13:34:21.403118    7084 pod_ready.go:98] node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:34:21.403197    7084 pod_ready.go:82] duration metric: took 312.6355ms for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.403197    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:34:21.403197    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.598657    7084 request.go:632] Waited for 195.294ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:34:21.598657    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:34:21.598657    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.598657    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.598657    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.612984    7084 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 13:34:21.612984    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.612984    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.612984    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.612984    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.612984    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:21 GMT
	I0923 13:34:21.612984    7084 round_trippers.go:580]     Audit-Id: 863a0912-361d-47e7-92e9-836ccee225ab
	I0923 13:34:21.612984    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.612984    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5t97","generateName":"kube-proxy-","namespace":"kube-system","uid":"49d7601a-bda4-421e-bde7-acc35c157962","resourceVersion":"1686","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I0923 13:34:21.799545    7084 request.go:632] Waited for 185.544ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:34:21.799545    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:34:21.799545    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.799545    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.799545    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:21.803275    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:21.803371    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:21.803371    7084 round_trippers.go:580]     Audit-Id: 9301a31b-7d6f-4384-b41a-c9f99186cd04
	I0923 13:34:21.803371    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:21.803371    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:21.803371    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:21.803371    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:21.803371    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:21.803958    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"1683","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4486 chars]
	I0923 13:34:21.804814    7084 pod_ready.go:98] node "multinode-560300-m02" hosting pod "kube-proxy-g5t97" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m02" has status "Ready":"Unknown"
	I0923 13:34:21.804893    7084 pod_ready.go:82] duration metric: took 401.5992ms for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:21.804893    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m02" hosting pod "kube-proxy-g5t97" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m02" has status "Ready":"Unknown"
	I0923 13:34:21.804893    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:21.998640    7084 request.go:632] Waited for 193.507ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:34:21.999165    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:34:21.999165    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:21.999165    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:21.999165    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.003139    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:22.003139    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.003139    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.003139    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.003139    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.003139    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:22.003139    7084 round_trippers.go:580]     Audit-Id: 18b23371-9762-40c5-9781-12dcc6fc34db
	I0923 13:34:22.003139    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.003290    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"1800","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0923 13:34:22.198360    7084 request.go:632] Waited for 194.3584ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.198360    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.198360    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:22.198360    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:22.198360    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.203104    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:22.203104    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.203104    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.203104    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.203104    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.203104    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.203104    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:22.203104    7084 round_trippers.go:580]     Audit-Id: b97249ec-a57e-43a1-9c1f-671676d3c95e
	I0923 13:34:22.203371    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:22.203912    7084 pod_ready.go:98] node "multinode-560300" hosting pod "kube-proxy-rgmcw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:22.203912    7084 pod_ready.go:82] duration metric: took 398.8778ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:22.203912    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "kube-proxy-rgmcw" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:22.203975    7084 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:22.398838    7084 request.go:632] Waited for 194.8496ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:34:22.398838    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:34:22.398838    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:22.398838    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:22.398838    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.402614    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:22.402684    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.402684    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.402746    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:22.402746    7084 round_trippers.go:580]     Audit-Id: 08e5c977-b589-466b-9f08-49f76c5594c2
	I0923 13:34:22.402746    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.402804    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.402804    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.403119    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"1747","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0923 13:34:22.598233    7084 request.go:632] Waited for 194.3677ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.598822    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.598822    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:22.598822    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:22.598822    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.602202    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:22.602202    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.602202    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:22 GMT
	I0923 13:34:22.602202    7084 round_trippers.go:580]     Audit-Id: f31ec490-6978-40af-a299-301a0b633e09
	I0923 13:34:22.602202    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.602202    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.602202    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.602202    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.602202    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:22.602806    7084 pod_ready.go:98] node "multinode-560300" hosting pod "kube-scheduler-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:22.602806    7084 pod_ready.go:82] duration metric: took 398.8039ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:22.602806    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300" hosting pod "kube-scheduler-multinode-560300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300" has status "Ready":"False"
	I0923 13:34:22.602806    7084 pod_ready.go:39] duration metric: took 1.5813021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:34:22.602806    7084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 13:34:22.623892    7084 command_runner.go:130] > -16
	I0923 13:34:22.623892    7084 ops.go:34] apiserver oom_adj: -16
	I0923 13:34:22.623892    7084 kubeadm.go:597] duration metric: took 12.8571878s to restartPrimaryControlPlane
	I0923 13:34:22.623892    7084 kubeadm.go:394] duration metric: took 12.9165376s to StartCluster
	I0923 13:34:22.623892    7084 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:22.623892    7084 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:34:22.626920    7084 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:34:22.628165    7084 start.go:235] Will wait 6m0s for node &{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 13:34:22.628165    7084 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 13:34:22.628825    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:34:22.633232    7084 out.go:177] * Enabled addons: 
	I0923 13:34:22.636903    7084 addons.go:510] duration metric: took 8.7368ms for enable addons: enabled=[]
	I0923 13:34:22.638851    7084 out.go:177] * Verifying Kubernetes components...
	I0923 13:34:22.651134    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:34:22.907023    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:34:22.942263    7084 node_ready.go:35] waiting up to 6m0s for node "multinode-560300" to be "Ready" ...
	I0923 13:34:22.942263    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:22.942263    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:22.942263    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:22.942263    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:22.946054    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:22.946054    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:22.946054    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:22.946132    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:22.946132    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:22.946132    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:23 GMT
	I0923 13:34:22.946132    7084 round_trippers.go:580]     Audit-Id: d85e60c0-8d36-41e1-965b-3b4bec1c420a
	I0923 13:34:22.946132    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:22.946349    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:23.442774    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:23.442774    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:23.442774    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:23.442774    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:23.447025    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:23.447025    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:23.447025    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:23.447025    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:23.447025    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:23.447025    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:23.447025    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:23 GMT
	I0923 13:34:23.447025    7084 round_trippers.go:580]     Audit-Id: 0ee239fa-384f-43d6-a803-9ad00153f5dc
	I0923 13:34:23.447509    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:23.942563    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:23.942563    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:23.942563    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:23.942563    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:23.946911    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:23.947005    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:23.947005    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:23.947005    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:23.947005    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:23.947005    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:24 GMT
	I0923 13:34:23.947005    7084 round_trippers.go:580]     Audit-Id: 15730503-fe0a-4d9a-b157-656b91aaa93c
	I0923 13:34:23.947005    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:23.947472    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:24.442802    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:24.442802    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:24.442802    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:24.442802    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:24.447524    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:24.447524    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:24.447648    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:24.447648    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:24.447648    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:24.447648    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:24.447648    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:24 GMT
	I0923 13:34:24.447648    7084 round_trippers.go:580]     Audit-Id: accdb84c-111d-4749-99c9-48a060b5841f
	I0923 13:34:24.448083    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:24.942658    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:24.942658    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:24.942658    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:24.942658    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:24.946965    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:24.947587    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:24.947587    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:24.947587    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:25 GMT
	I0923 13:34:24.947587    7084 round_trippers.go:580]     Audit-Id: 1d2d5195-09ac-4560-abd2-0f1413ac714a
	I0923 13:34:24.947587    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:24.947691    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:24.947691    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:24.948462    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:24.949162    7084 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:34:25.443557    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:25.443557    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:25.443557    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:25.443557    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:25.447086    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:25.447086    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:25.447086    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:25.447610    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:25.447610    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:25 GMT
	I0923 13:34:25.447610    7084 round_trippers.go:580]     Audit-Id: 98bf2518-e6ac-4434-8b0a-bc07f9a3f0c2
	I0923 13:34:25.447610    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:25.447610    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:25.448557    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:25.942977    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:25.942977    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:25.942977    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:25.942977    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:25.947683    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:25.947778    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:25.947778    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:25.947778    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:25.947778    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:25.947914    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:26 GMT
	I0923 13:34:25.947914    7084 round_trippers.go:580]     Audit-Id: 90577b39-0071-4325-8e08-df51e971b616
	I0923 13:34:25.947914    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:25.948147    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:26.443384    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:26.443384    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:26.443384    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:26.443384    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:26.447028    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:26.447028    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:26.447129    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:26.447129    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:26 GMT
	I0923 13:34:26.447129    7084 round_trippers.go:580]     Audit-Id: d58d6db9-8749-4b06-8d1c-7bbdad7daa27
	I0923 13:34:26.447129    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:26.447129    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:26.447129    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:26.447457    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:26.943065    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:26.943065    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:26.943065    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:26.943065    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:26.952535    7084 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 13:34:26.952593    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:26.952627    7084 round_trippers.go:580]     Audit-Id: 9d5b0777-e5df-4118-a57a-80bd877dde2f
	I0923 13:34:26.952649    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:26.952649    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:26.952649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:26.952649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:26.952649    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:27 GMT
	I0923 13:34:26.952649    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:26.953240    7084 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:34:27.443233    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:27.443233    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:27.443233    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:27.443233    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:27.446998    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:27.446998    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:27.446998    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:27.446998    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:27.446998    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:27 GMT
	I0923 13:34:27.446998    7084 round_trippers.go:580]     Audit-Id: a9fc73d9-f836-4d24-9ae4-9c06385e6563
	I0923 13:34:27.446998    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:27.446998    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:27.448440    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:27.944598    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:27.944598    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:27.944598    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:27.944598    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:27.948027    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:27.948027    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:27.948027    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:27.948027    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:27.948027    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:27.948027    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:27.948027    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:28 GMT
	I0923 13:34:27.948027    7084 round_trippers.go:580]     Audit-Id: 470a7736-33c7-4c45-9dea-9fd2138a7b85
	I0923 13:34:27.948241    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:28.442833    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:28.442833    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:28.442833    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:28.442833    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:28.447972    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:28.448064    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:28.448064    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:28.448064    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:28.448064    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:28.448064    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:28.448064    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:28 GMT
	I0923 13:34:28.448064    7084 round_trippers.go:580]     Audit-Id: 34a3aba2-a03d-47ef-a2bd-a86ae2838dd6
	I0923 13:34:28.448376    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:28.943461    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:28.943461    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:28.943461    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:28.943461    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:28.947533    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:28.947533    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:28.947533    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:28.947533    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:28.947533    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:29 GMT
	I0923 13:34:28.947533    7084 round_trippers.go:580]     Audit-Id: f5674216-47e4-417b-ae18-47041394862d
	I0923 13:34:28.947533    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:28.947533    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:28.947533    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:29.443825    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:29.443825    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:29.443825    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:29.443825    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:29.448097    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:29.448226    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:29.448226    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:29.448226    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:29.448226    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:29.448226    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:29 GMT
	I0923 13:34:29.448226    7084 round_trippers.go:580]     Audit-Id: eb5ecd94-052e-428c-b0f5-cdbb0b9a5c35
	I0923 13:34:29.448226    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:29.448549    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:29.448785    7084 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:34:29.943198    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:29.943198    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:29.943198    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:29.943198    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:29.947767    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:29.947767    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:29.947767    7084 round_trippers.go:580]     Audit-Id: 9293a795-157e-4780-8f3b-d88ff972393a
	I0923 13:34:29.947767    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:29.947767    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:29.947767    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:29.947767    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:29.947767    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:30 GMT
	I0923 13:34:29.948651    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:30.444014    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:30.444115    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:30.444115    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:30.444115    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:30.450186    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:34:30.450312    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:30.450312    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:30.450312    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:30 GMT
	I0923 13:34:30.450312    7084 round_trippers.go:580]     Audit-Id: 470e870b-8ce1-43ca-a23b-782b830ef6cc
	I0923 13:34:30.450312    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:30.450312    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:30.450312    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:30.450469    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:30.943172    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:30.943172    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:30.943172    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:30.943172    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:30.947605    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:30.947605    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:30.947716    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:30.947716    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:30.947716    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:31 GMT
	I0923 13:34:30.947716    7084 round_trippers.go:580]     Audit-Id: 111dba1d-efbe-4966-84e8-d453eda89ca8
	I0923 13:34:30.947716    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:30.947716    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:30.947916    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:31.444359    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:31.444359    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:31.444359    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:31.444359    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:31.448942    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:31.448942    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:31.448942    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:31 GMT
	I0923 13:34:31.448942    7084 round_trippers.go:580]     Audit-Id: 71d5fcd9-00a1-4186-b536-24837ab848a1
	I0923 13:34:31.449041    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:31.449041    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:31.449041    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:31.449041    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:31.449419    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:31.450192    7084 node_ready.go:53] node "multinode-560300" has status "Ready":"False"
	I0923 13:34:31.944391    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:31.944470    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:31.944470    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:31.944470    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:31.947873    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:31.947873    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:31.947873    7084 round_trippers.go:580]     Audit-Id: 7ef5da8e-de20-4172-86d9-ae8b0f001440
	I0923 13:34:31.948079    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:31.948079    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:31.948079    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:31.948079    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:31.948079    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:32 GMT
	I0923 13:34:31.948460    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:32.443390    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:32.443390    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.443390    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.443390    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.446851    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:32.447762    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.447762    7084 round_trippers.go:580]     Audit-Id: e718816a-a853-4faf-98b7-30da9ce7c07d
	I0923 13:34:32.447762    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.447762    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.447762    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.447762    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.447762    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:32 GMT
	I0923 13:34:32.448052    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1785","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5485 chars]
	I0923 13:34:32.943502    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:32.943502    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.943502    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.943502    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.947220    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:32.947665    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.947665    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.947665    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:32.947665    7084 round_trippers.go:580]     Audit-Id: e4d7fd9f-0234-4a76-9dec-f09e95a44a01
	I0923 13:34:32.947665    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.947665    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.947740    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.947896    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:32.948600    7084 node_ready.go:49] node "multinode-560300" has status "Ready":"True"
	I0923 13:34:32.948658    7084 node_ready.go:38] duration metric: took 10.0057193s for node "multinode-560300" to be "Ready" ...
	I0923 13:34:32.948714    7084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:34:32.948889    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:32.948889    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.948889    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.948889    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.954646    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:32.954670    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.954670    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.954670    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:32.954670    7084 round_trippers.go:580]     Audit-Id: 27a9c888-409d-4ff5-b3e6-31dd39a04cf5
	I0923 13:34:32.954670    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.954670    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.954839    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.956152    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1829"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90024 chars]
	I0923 13:34:32.960650    7084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:32.960650    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:32.960650    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.960650    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.960650    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.963644    7084 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 13:34:32.963751    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.963751    7084 round_trippers.go:580]     Audit-Id: 767e7ce8-9691-4ef4-86c3-cd45a19c578a
	I0923 13:34:32.963751    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.963751    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.963829    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.963829    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.963829    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:32.964056    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:32.965180    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:32.965246    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:32.965246    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:32.965246    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:32.967781    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:32.967781    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:32.967862    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:32.967862    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:32.967862    7084 round_trippers.go:580]     Audit-Id: 236ce0fc-2448-48a6-b021-e5e3564e9b66
	I0923 13:34:32.967862    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:32.967862    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:32.967862    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:32.968235    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:33.461201    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:33.461201    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:33.461201    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:33.461201    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:33.465759    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:33.465759    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:33.465759    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:33.465759    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:33.465759    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:33.465857    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:33.465857    7084 round_trippers.go:580]     Audit-Id: 52bea11d-b25b-4a0f-a046-72e008baa47f
	I0923 13:34:33.465857    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:33.466531    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:33.466797    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:33.466797    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:33.466797    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:33.466797    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:33.469823    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:33.469823    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:33.469823    7084 round_trippers.go:580]     Audit-Id: 23ea692e-0aa1-4ce6-8fdb-78e0ac1e9d6d
	I0923 13:34:33.469919    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:33.469919    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:33.469919    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:33.469919    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:33.469919    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:33 GMT
	I0923 13:34:33.470166    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:33.960919    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:33.960919    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:33.960919    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:33.960919    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:33.964864    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:33.964864    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:33.964864    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:33.964864    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:33.964864    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:33.965050    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:34 GMT
	I0923 13:34:33.965050    7084 round_trippers.go:580]     Audit-Id: 1070b9c7-6d3d-49b3-9d65-0a87575304bd
	I0923 13:34:33.965050    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:33.965111    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:33.966437    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:33.966437    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:33.966514    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:33.966514    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:33.969281    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:33.969281    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:33.969281    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:33.969281    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:33.969281    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:33.969281    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:34 GMT
	I0923 13:34:33.969281    7084 round_trippers.go:580]     Audit-Id: b023b399-c0a1-424b-a44e-0ba91c4b161c
	I0923 13:34:33.969281    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:33.969281    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:34.461544    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:34.461544    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:34.461544    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:34.461544    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:34.466744    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:34.466830    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:34.466830    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:34.466830    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:34.466830    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:34.466830    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:34.466830    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:34 GMT
	I0923 13:34:34.466909    7084 round_trippers.go:580]     Audit-Id: 382d7fef-d697-4903-8245-df8ac560e2d6
	I0923 13:34:34.467148    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:34.468111    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:34.468111    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:34.468111    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:34.468111    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:34.472049    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:34.472117    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:34.472117    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:34.472117    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:34 GMT
	I0923 13:34:34.472117    7084 round_trippers.go:580]     Audit-Id: a58abc47-999b-4840-bcc5-a8dbbbfb0ee0
	I0923 13:34:34.472117    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:34.472177    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:34.472177    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:34.472598    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:34.961200    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:34.961200    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:34.961200    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:34.961200    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:34.965430    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:34.965430    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:34.965519    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:35 GMT
	I0923 13:34:34.965519    7084 round_trippers.go:580]     Audit-Id: 3e2fdbfb-d230-475e-b791-fd097549bf6f
	I0923 13:34:34.965519    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:34.965519    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:34.965519    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:34.965519    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:34.965519    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:34.966357    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:34.966357    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:34.966357    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:34.966357    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:34.973198    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:34:34.973198    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:34.973198    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:34.973198    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:34.973198    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:34.973198    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:35 GMT
	I0923 13:34:34.973198    7084 round_trippers.go:580]     Audit-Id: 47615921-14f9-4a3f-828d-bd2cc417ab7f
	I0923 13:34:34.973198    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:34.973198    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:34.973918    7084 pod_ready.go:103] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"False"
	I0923 13:34:35.461482    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:35.461482    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.461482    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.461482    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.465774    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:35.465834    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.465834    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.465834    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.465834    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.465834    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:35 GMT
	I0923 13:34:35.465879    7084 round_trippers.go:580]     Audit-Id: 86320bcf-6d21-43a6-8b8c-21eb1af63f4f
	I0923 13:34:35.465879    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.466281    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1746","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7097 chars]
	I0923 13:34:35.467436    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.467436    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.467436    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.467436    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.474734    7084 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 13:34:35.474734    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.474734    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.474734    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.474734    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.474734    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:35 GMT
	I0923 13:34:35.474734    7084 round_trippers.go:580]     Audit-Id: b6f120e0-ae73-4bee-ae15-aa5e6029c5fe
	I0923 13:34:35.474734    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.474734    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.961269    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:34:35.961269    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.961269    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.961269    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.965651    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:35.965651    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.965651    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.965651    7084 round_trippers.go:580]     Audit-Id: deed854d-755b-4731-8750-c146a513a261
	I0923 13:34:35.965651    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.965651    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.965651    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.965651    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.965862    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0923 13:34:35.966544    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.966618    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.966618    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.966618    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.968759    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.969086    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.969086    7084 round_trippers.go:580]     Audit-Id: cea3089c-74ca-4174-b629-6d3d37be4449
	I0923 13:34:35.969086    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.969086    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.969086    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.969086    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.969086    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.969334    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.970028    7084 pod_ready.go:93] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:35.970080    7084 pod_ready.go:82] duration metric: took 3.0092274s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.970131    7084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.970302    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:34:35.970372    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.970372    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.970589    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.972750    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.973484    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.973484    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.973484    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.973532    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.973532    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.973532    7084 round_trippers.go:580]     Audit-Id: 3edc7c65-c8e1-452b-a627-96548da01d14
	I0923 13:34:35.973532    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.973797    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1822","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0923 13:34:35.974116    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.974116    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.974116    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.974116    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.976691    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.977150    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.977150    7084 round_trippers.go:580]     Audit-Id: 7a85e087-f22c-481c-84bf-c4e8214fb6cb
	I0923 13:34:35.977150    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.977150    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.977199    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.977199    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.977199    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.977429    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.978162    7084 pod_ready.go:93] pod "etcd-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:35.978217    7084 pod_ready.go:82] duration metric: took 7.9736ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.978217    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.978439    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:34:35.978439    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.978439    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.978530    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.983713    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:35.983713    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.983713    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.983713    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.983713    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.983713    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.983713    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.983713    7084 round_trippers.go:580]     Audit-Id: f5bacf51-d3cc-44cf-95c2-9bda12e6d41b
	I0923 13:34:35.983713    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"c88cb5c4-fe30-4354-bf55-1f281cf11190","resourceVersion":"1816","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.156.56:8443","kubernetes.io/config.hash":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.mirror":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.seen":"2024-09-23T13:34:12.942044692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0923 13:34:35.984351    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.984351    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.984351    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.984351    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.987579    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:35.987579    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.987579    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.987579    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.987579    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.987579    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.987579    7084 round_trippers.go:580]     Audit-Id: 130a4489-c829-40f3-9b72-eb4067f8ac64
	I0923 13:34:35.987579    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.987916    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.988320    7084 pod_ready.go:93] pod "kube-apiserver-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:35.988352    7084 pod_ready.go:82] duration metric: took 10.1346ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.988392    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.988466    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:34:35.988498    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.988498    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.988537    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.990653    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.990653    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.990653    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.990653    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.990653    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.990653    7084 round_trippers.go:580]     Audit-Id: 921b7c4d-b6e0-485f-971f-15d865263097
	I0923 13:34:35.990653    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.990653    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.990653    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"1809","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0923 13:34:35.991772    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:35.991772    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.991829    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.991829    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.994222    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.994269    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.994300    7084 round_trippers.go:580]     Audit-Id: df40959d-91e9-4a5c-8eb6-eb033d775488
	I0923 13:34:35.994300    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.994300    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.994300    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.994300    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.994346    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.994496    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:35.994816    7084 pod_ready.go:93] pod "kube-controller-manager-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:35.994894    7084 pod_ready.go:82] duration metric: took 6.5012ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.994894    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:35.994894    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:34:35.994894    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.994894    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.994894    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:35.997301    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:35.997334    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:35.997334    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:35.997377    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:35.997377    7084 round_trippers.go:580]     Audit-Id: a30a6599-3295-43a4-b921-75c3c87ff202
	I0923 13:34:35.997377    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:35.997377    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:35.997377    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:35.997587    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dbkdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"44a5a18e-0e93-4293-8d4b-13e3ec9acfef","resourceVersion":"1660","creationTimestamp":"2024-09-23T13:20:08Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:20:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0923 13:34:35.997679    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:34:35.997679    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:35.997679    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:35.997679    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.000450    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:36.000450    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.000450    7084 round_trippers.go:580]     Audit-Id: e6912442-cbc0-4295-9638-745492c131ab
	I0923 13:34:36.000450    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.000503    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.000503    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.000503    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.000503    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.000618    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"781efd95-4e81-4850-a300-9cef56c5e6d4","resourceVersion":"1786","creationTimestamp":"2024-09-23T13:30:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_30_01_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:30:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4304 chars]
	I0923 13:34:36.000784    7084 pod_ready.go:98] node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:34:36.000784    7084 pod_ready.go:82] duration metric: took 5.8894ms for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:36.000784    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:34:36.000784    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:36.162004    7084 request.go:632] Waited for 161.2088ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:34:36.162004    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:34:36.162004    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.162004    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.162004    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.166500    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:36.166582    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.166659    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.166659    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.166659    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.166659    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.166659    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.166659    7084 round_trippers.go:580]     Audit-Id: ea93f38b-24cc-44de-b9bd-d60128e72fd8
	I0923 13:34:36.166790    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5t97","generateName":"kube-proxy-","namespace":"kube-system","uid":"49d7601a-bda4-421e-bde7-acc35c157962","resourceVersion":"1686","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I0923 13:34:36.361987    7084 request.go:632] Waited for 194.0221ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:34:36.361987    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:34:36.361987    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.361987    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.361987    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.365703    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:36.365703    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.365793    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.365793    7084 round_trippers.go:580]     Audit-Id: a5e8f6ce-22e0-480d-a066-17f57588fc6f
	I0923 13:34:36.365793    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.365793    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.365793    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.365793    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.366070    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d","resourceVersion":"1683","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_15_48_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4486 chars]
	I0923 13:34:36.366892    7084 pod_ready.go:98] node "multinode-560300-m02" hosting pod "kube-proxy-g5t97" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m02" has status "Ready":"Unknown"
	I0923 13:34:36.366965    7084 pod_ready.go:82] duration metric: took 366.1567ms for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	E0923 13:34:36.366965    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m02" hosting pod "kube-proxy-g5t97" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m02" has status "Ready":"Unknown"
	I0923 13:34:36.366965    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:36.561510    7084 request.go:632] Waited for 194.4251ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:34:36.561858    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:34:36.561858    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.561858    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.561858    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.567339    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:36.567339    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.567339    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.567339    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.567339    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.567339    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.567339    7084 round_trippers.go:580]     Audit-Id: e4178d32-33fa-4885-8e58-0c7bdf0fc9cd
	I0923 13:34:36.567339    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.567339    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"1800","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0923 13:34:36.761568    7084 request.go:632] Waited for 192.9062ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:36.761568    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:36.761568    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.761568    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.761568    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.764743    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:36.764743    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.764743    7084 round_trippers.go:580]     Audit-Id: 76e11f9b-f56a-4e7a-b118-b8e6cb9f754f
	I0923 13:34:36.764743    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.764743    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.764743    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.764743    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.764743    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:36 GMT
	I0923 13:34:36.766459    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:36.766718    7084 pod_ready.go:93] pod "kube-proxy-rgmcw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:36.766718    7084 pod_ready.go:82] duration metric: took 399.7261ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:36.766718    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:36.961791    7084 request.go:632] Waited for 194.4334ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:34:36.962095    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:34:36.962095    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:36.962095    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:36.962095    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:36.965775    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:34:36.965840    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:36.965903    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:36.965903    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:36.965903    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:36.965903    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:36.965903    7084 round_trippers.go:580]     Audit-Id: 3a4bd738-5367-41ed-89bd-c94eb0b00a8d
	I0923 13:34:36.965959    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:36.966093    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"1810","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0923 13:34:37.161958    7084 request.go:632] Waited for 194.9383ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:37.161958    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:34:37.161958    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.161958    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.161958    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.166108    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:37.166108    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.166108    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.166108    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.166108    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.166108    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.166108    7084 round_trippers.go:580]     Audit-Id: 07692773-9451-4820-bd93-f1b5d8effde2
	I0923 13:34:37.166108    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.166108    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:34:37.166730    7084 pod_ready.go:93] pod "kube-scheduler-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:34:37.166730    7084 pod_ready.go:82] duration metric: took 399.4491ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:34:37.166730    7084 pod_ready.go:39] duration metric: took 4.2177311s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:34:37.167317    7084 api_server.go:52] waiting for apiserver process to appear ...
	I0923 13:34:37.179486    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:34:37.200800    7084 command_runner.go:130] > 1960
	I0923 13:34:37.200800    7084 api_server.go:72] duration metric: took 14.5711188s to wait for apiserver process to appear ...
	I0923 13:34:37.200800    7084 api_server.go:88] waiting for apiserver healthz status ...
	I0923 13:34:37.200800    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:34:37.208033    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 200:
	ok
	I0923 13:34:37.208033    7084 round_trippers.go:463] GET https://172.19.156.56:8443/version
	I0923 13:34:37.208033    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.208033    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.208033    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.210777    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:34:37.210777    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Audit-Id: 113e3b70-7cd0-4af4-9b48-3aff7d2d7ac2
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.210777    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.210777    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Content-Length: 263
	I0923 13:34:37.210777    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.210777    7084 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 13:34:37.210777    7084 api_server.go:141] control plane version: v1.31.1
	I0923 13:34:37.210777    7084 api_server.go:131] duration metric: took 9.9767ms to wait for apiserver health ...
	I0923 13:34:37.210777    7084 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 13:34:37.361896    7084 request.go:632] Waited for 151.1091ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:37.362271    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:37.362271    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.362271    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.362271    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.367708    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:37.367708    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.367708    7084 round_trippers.go:580]     Audit-Id: f697420b-b93c-4ae0-9ad8-4cbb3a5dbc56
	I0923 13:34:37.367708    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.367708    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.367708    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.367708    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.367708    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.370230    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1848"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89971 chars]
	I0923 13:34:37.376732    7084 system_pods.go:59] 12 kube-system pods found
	I0923 13:34:37.376815    7084 system_pods.go:61] "coredns-7c65d6cfc9-glx94" [f476c8f8-667a-48d4-84f8-4aa15336cea9] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "etcd-multinode-560300" [477ee4f5-e333-4042-97cd-8457f60fd696] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kindnet-mdnmc" [ffaf3266-f3b8-424f-888b-15aff927de53] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kindnet-qg99z" [0f714fff-dd9b-4ba3-b2e9-6e9e18f21ae9] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kindnet-z9mrc" [c9dfa12e-54ef-4d0b-825e-bcbcaa77b5a9] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-apiserver-multinode-560300" [c88cb5c4-fe30-4354-bf55-1f281cf11190] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-controller-manager-multinode-560300" [aa0d358b-19fd-4553-8a34-f772ba945019] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-proxy-dbkdp" [44a5a18e-0e93-4293-8d4b-13e3ec9acfef] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-proxy-g5t97" [49d7601a-bda4-421e-bde7-acc35c157962] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-proxy-rgmcw" [97050e09-6fc3-4e7b-b00e-07eb9332bf15] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "kube-scheduler-multinode-560300" [01e5d6a3-2eb6-4fa4-8607-072724fb2880] Running
	I0923 13:34:37.376815    7084 system_pods.go:61] "storage-provisioner" [444d1029-f19d-4fa6-b454-c9c710e6d9b2] Running
	I0923 13:34:37.376815    7084 system_pods.go:74] duration metric: took 166.0265ms to wait for pod list to return data ...
	I0923 13:34:37.376815    7084 default_sa.go:34] waiting for default service account to be created ...
	I0923 13:34:37.562445    7084 request.go:632] Waited for 185.6179ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/default/serviceaccounts
	I0923 13:34:37.562750    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/default/serviceaccounts
	I0923 13:34:37.562750    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.562750    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.562750    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.567131    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:37.567131    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.567131    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.567131    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Content-Length: 262
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Audit-Id: 7d46abf1-032c-40e4-8bdc-d314dbbfbbd0
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.567131    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.567131    7084 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1848"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6aaed0f9-99f6-4dde-94ff-d8ba898738d6","resourceVersion":"351","creationTimestamp":"2024-09-23T13:12:59Z"}}]}
	I0923 13:34:37.567808    7084 default_sa.go:45] found service account: "default"
	I0923 13:34:37.567900    7084 default_sa.go:55] duration metric: took 191.0728ms for default service account to be created ...
	I0923 13:34:37.567900    7084 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 13:34:37.761462    7084 request.go:632] Waited for 193.4423ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:37.761462    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:34:37.761462    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.761462    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.761462    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.766564    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:34:37.766650    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.766650    7084 round_trippers.go:580]     Audit-Id: 497101ae-4737-4658-8da4-0db07c330a7c
	I0923 13:34:37.766705    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.766705    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.766705    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.766705    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.766705    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:37 GMT
	I0923 13:34:37.768613    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1848"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89971 chars]
	I0923 13:34:37.773732    7084 system_pods.go:86] 12 kube-system pods found
	I0923 13:34:37.773823    7084 system_pods.go:89] "coredns-7c65d6cfc9-glx94" [f476c8f8-667a-48d4-84f8-4aa15336cea9] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "etcd-multinode-560300" [477ee4f5-e333-4042-97cd-8457f60fd696] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kindnet-mdnmc" [ffaf3266-f3b8-424f-888b-15aff927de53] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kindnet-qg99z" [0f714fff-dd9b-4ba3-b2e9-6e9e18f21ae9] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kindnet-z9mrc" [c9dfa12e-54ef-4d0b-825e-bcbcaa77b5a9] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-apiserver-multinode-560300" [c88cb5c4-fe30-4354-bf55-1f281cf11190] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-controller-manager-multinode-560300" [aa0d358b-19fd-4553-8a34-f772ba945019] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-proxy-dbkdp" [44a5a18e-0e93-4293-8d4b-13e3ec9acfef] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-proxy-g5t97" [49d7601a-bda4-421e-bde7-acc35c157962] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-proxy-rgmcw" [97050e09-6fc3-4e7b-b00e-07eb9332bf15] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "kube-scheduler-multinode-560300" [01e5d6a3-2eb6-4fa4-8607-072724fb2880] Running
	I0923 13:34:37.773823    7084 system_pods.go:89] "storage-provisioner" [444d1029-f19d-4fa6-b454-c9c710e6d9b2] Running
	I0923 13:34:37.773823    7084 system_pods.go:126] duration metric: took 205.9088ms to wait for k8s-apps to be running ...
	I0923 13:34:37.773823    7084 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:34:37.781033    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:34:37.804835    7084 system_svc.go:56] duration metric: took 31.0102ms WaitForService to wait for kubelet
	I0923 13:34:37.804977    7084 kubeadm.go:582] duration metric: took 15.1752124s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:34:37.805006    7084 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:34:37.961594    7084 request.go:632] Waited for 156.4601ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes
	I0923 13:34:37.961594    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes
	I0923 13:34:37.961594    7084 round_trippers.go:469] Request Headers:
	I0923 13:34:37.961594    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:34:37.961594    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:34:37.966164    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:34:37.966223    7084 round_trippers.go:577] Response Headers:
	I0923 13:34:37.966223    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:34:37.966223    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:34:37.966223    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:34:38 GMT
	I0923 13:34:37.966223    7084 round_trippers.go:580]     Audit-Id: f8479b5e-2545-402a-8deb-5fac0f417e3f
	I0923 13:34:37.966223    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:34:37.966223    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:34:37.966223    7084 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1848"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16065 chars]
	I0923 13:34:37.968192    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:37.968321    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:37.968321    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:37.968321    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:37.968321    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:34:37.968321    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:34:37.968321    7084 node_conditions.go:105] duration metric: took 163.3032ms to run NodePressure ...
	I0923 13:34:37.968436    7084 start.go:241] waiting for startup goroutines ...
	I0923 13:34:37.968436    7084 start.go:246] waiting for cluster config update ...
	I0923 13:34:37.968436    7084 start.go:255] writing updated cluster config ...
	I0923 13:34:37.972041    7084 out.go:201] 
	I0923 13:34:37.975251    7084 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:34:37.985805    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:34:37.985938    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:34:37.990678    7084 out.go:177] * Starting "multinode-560300-m02" worker node in "multinode-560300" cluster
	I0923 13:34:37.993031    7084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:34:37.994029    7084 cache.go:56] Caching tarball of preloaded images
	I0923 13:34:37.994185    7084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 13:34:37.994185    7084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 13:34:37.994185    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:34:37.995400    7084 start.go:360] acquireMachinesLock for multinode-560300-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:34:37.996381    7084 start.go:364] duration metric: took 981µs to acquireMachinesLock for "multinode-560300-m02"
	I0923 13:34:37.996381    7084 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:34:37.996381    7084 fix.go:54] fixHost starting: m02
	I0923 13:34:37.996983    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:39.809217    7084 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 13:34:39.809685    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:39.809685    7084 fix.go:112] recreateIfNeeded on multinode-560300-m02: state=Stopped err=<nil>
	W0923 13:34:39.809685    7084 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:34:39.813166    7084 out.go:177] * Restarting existing hyperv VM for "multinode-560300-m02" ...
	I0923 13:34:39.815442    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-560300-m02
	I0923 13:34:42.542544    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:34:42.542544    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:42.542899    7084 main.go:141] libmachine: Waiting for host to start...
	I0923 13:34:42.542899    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:44.505211    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:34:44.505211    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:44.505211    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:34:46.706131    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:34:46.706131    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:47.706556    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:49.637596    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:34:49.638305    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:49.638305    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:34:51.828815    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:34:51.828815    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:52.829244    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:54.726159    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:34:54.726619    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:54.726619    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:34:56.894069    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:34:56.894069    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:57.894477    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:34:59.812955    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:34:59.812955    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:34:59.813356    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:02.053652    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:35:02.053942    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:03.055265    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:05.029320    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:05.029320    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:05.029320    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:07.459752    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:07.460126    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:07.463492    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:09.370891    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:09.370891    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:09.371826    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:11.681787    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:11.681973    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:11.681973    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:35:11.684088    7084 machine.go:93] provisionDockerMachine start ...
	I0923 13:35:11.684153    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:13.619849    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:13.620050    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:13.620050    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:15.918661    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:15.918661    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:15.922865    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:15.922865    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:15.922865    7084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:35:16.056732    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 13:35:16.056732    7084 buildroot.go:166] provisioning hostname "multinode-560300-m02"
	I0923 13:35:16.057269    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:17.980566    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:17.980566    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:17.980566    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:20.295878    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:20.295878    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:20.299670    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:20.300313    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:20.300313    7084 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-560300-m02 && echo "multinode-560300-m02" | sudo tee /etc/hostname
	I0923 13:35:20.466164    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-560300-m02
	
	I0923 13:35:20.466707    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:22.389514    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:22.389514    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:22.389514    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:24.649347    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:24.649347    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:24.653957    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:24.653957    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:24.653957    7084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-560300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-560300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-560300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:35:24.814634    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:35:24.814634    7084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 13:35:24.814634    7084 buildroot.go:174] setting up certificates
	I0923 13:35:24.814634    7084 provision.go:84] configureAuth start
	I0923 13:35:24.814634    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:26.733306    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:26.733306    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:26.733306    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:29.020658    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:29.020658    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:29.020658    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:30.930455    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:30.930455    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:30.931247    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:33.153362    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:33.154229    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:33.154229    7084 provision.go:143] copyHostCerts
	I0923 13:35:33.154439    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 13:35:33.154661    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 13:35:33.154661    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 13:35:33.155063    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 13:35:33.156143    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 13:35:33.156437    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 13:35:33.156516    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 13:35:33.157017    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 13:35:33.158217    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 13:35:33.158249    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 13:35:33.158249    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 13:35:33.158249    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 13:35:33.159639    7084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-560300-m02 san=[127.0.0.1 172.19.147.0 localhost minikube multinode-560300-m02]
	I0923 13:35:33.295795    7084 provision.go:177] copyRemoteCerts
	I0923 13:35:33.304719    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:35:33.305314    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:35.148187    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:35.148187    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:35.148446    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:37.377806    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:37.377806    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:37.378933    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:35:37.483765    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.178157s)
	I0923 13:35:37.483838    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 13:35:37.483838    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:35:37.524211    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 13:35:37.524475    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0923 13:35:37.563616    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 13:35:37.564209    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 13:35:37.605585    7084 provision.go:87] duration metric: took 12.7900878s to configureAuth
	I0923 13:35:37.605680    7084 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:35:37.606305    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:35:37.606414    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:39.460470    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:39.461012    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:39.461079    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:41.649608    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:41.649608    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:41.653807    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:41.654156    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:41.654156    7084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 13:35:41.795099    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 13:35:41.795099    7084 buildroot.go:70] root file system type: tmpfs
	I0923 13:35:41.795329    7084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 13:35:41.795329    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:43.630208    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:43.630208    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:43.630208    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:45.830017    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:45.830017    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:45.834095    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:45.834192    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:45.834192    7084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.156.56"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 13:35:46.013447    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.156.56
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 13:35:46.013579    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:47.892547    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:47.893564    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:47.893750    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:50.137028    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:50.137028    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:50.141166    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:35:50.141773    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:35:50.141773    7084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 13:35:52.444305    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 13:35:52.444357    7084 machine.go:96] duration metric: took 40.7575175s to provisionDockerMachine
	I0923 13:35:52.444424    7084 start.go:293] postStartSetup for "multinode-560300-m02" (driver="hyperv")
	I0923 13:35:52.444480    7084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:35:52.455831    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:35:52.455831    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:54.295962    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:54.295962    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:54.296542    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:35:56.560763    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:35:56.560763    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:56.561417    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:35:56.675255    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.21914s)
	I0923 13:35:56.684060    7084 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:35:56.693981    7084 command_runner.go:130] > NAME=Buildroot
	I0923 13:35:56.694652    7084 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:35:56.694688    7084 command_runner.go:130] > ID=buildroot
	I0923 13:35:56.694688    7084 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:35:56.694688    7084 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:35:56.694942    7084 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:35:56.695009    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 13:35:56.695009    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 13:35:56.695009    7084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 13:35:56.695009    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 13:35:56.704792    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:35:56.720657    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 13:35:56.764536    7084 start.go:296] duration metric: took 4.3198208s for postStartSetup
	I0923 13:35:56.764536    7084 fix.go:56] duration metric: took 1m18.7628379s for fixHost
	I0923 13:35:56.764536    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:35:58.599988    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:35:58.599988    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:35:58.600063    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:00.780434    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:00.780434    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:00.784402    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:36:00.784774    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:36:00.784847    7084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:36:00.933863    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727098561.142260224
	
	I0923 13:36:00.933958    7084 fix.go:216] guest clock: 1727098561.142260224
	I0923 13:36:00.933958    7084 fix.go:229] Guest: 2024-09-23 13:36:01.142260224 +0000 UTC Remote: 2024-09-23 13:35:56.7645364 +0000 UTC m=+215.749594001 (delta=4.377723824s)
	I0923 13:36:00.933958    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:02.788774    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:02.788774    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:02.788845    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:05.024843    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:05.025710    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:05.029525    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:36:05.029925    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.147.0 22 <nil> <nil>}
	I0923 13:36:05.029999    7084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727098560
	I0923 13:36:05.177960    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 13:36:00 UTC 2024
	
	I0923 13:36:05.177960    7084 fix.go:236] clock set: Mon Sep 23 13:36:00 UTC 2024
	 (err=<nil>)
	I0923 13:36:05.177960    7084 start.go:83] releasing machines lock for "multinode-560300-m02", held for 1m27.1756945s
	I0923 13:36:05.177960    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:07.034702    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:07.034702    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:07.034702    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:09.311777    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:09.311777    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:09.314188    7084 out.go:177] * Found network options:
	I0923 13:36:09.316740    7084 out.go:177]   - NO_PROXY=172.19.156.56
	W0923 13:36:09.319110    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:36:09.321118    7084 out.go:177]   - NO_PROXY=172.19.156.56
	W0923 13:36:09.324063    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:36:09.325562    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:36:09.327996    7084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 13:36:09.327996    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:09.335604    7084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:36:09.336598    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:11.281563    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:13.592675    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:13.593498    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:13.593498    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:36:13.611178    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:13.611532    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:13.611804    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:36:13.685628    7084 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0923 13:36:13.685724    7084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.3574343s)
	W0923 13:36:13.685724    7084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 13:36:13.718008    7084 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0923 13:36:13.718008    7084 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3821085s)
	W0923 13:36:13.718008    7084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:36:13.727942    7084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:36:13.760346    7084 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 13:36:13.760346    7084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 13:36:13.760346    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:36:13.760346    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0923 13:36:13.777784    7084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 13:36:13.777784    7084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 13:36:13.796877    7084 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 13:36:13.805712    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 13:36:13.833530    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 13:36:13.852941    7084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:36:13.861736    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 13:36:13.891622    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:36:13.919313    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:36:13.946089    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:36:13.975060    7084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:36:14.003428    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:36:14.031135    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:36:14.059868    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
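The run of `sed` commands above (13:36:13.805–13:36:14.059) is how minikube rewrites `/etc/containerd/config.toml` to select the "cgroupfs" cgroup driver. A minimal standalone sketch of the key substitution, performed on a throwaway temp file rather than the real config (the config fragment and file path here are illustrative, and GNU `sed -i -r` is assumed):

```shell
# Stand-in for /etc/containerd/config.toml (throwaway temp file)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# The same substitution the log runs: force SystemdCgroup = false,
# i.e. switch containerd from the systemd driver to cgroupfs,
# preserving the line's leading indentation via the \1 backreference.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

The anchored `^( *)` pattern is what makes the edit idempotent: re-running it against an already-rewritten file leaves the value at `false`.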
	I0923 13:36:14.087652    7084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:36:14.103302    7084 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:36:14.103302    7084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:36:14.112125    7084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:36:14.141573    7084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:36:14.174152    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:14.348528    7084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:36:14.378053    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:36:14.391873    7084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 13:36:14.414349    7084 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 13:36:14.414445    7084 command_runner.go:130] > [Unit]
	I0923 13:36:14.414445    7084 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 13:36:14.414445    7084 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 13:36:14.414445    7084 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 13:36:14.414445    7084 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 13:36:14.414445    7084 command_runner.go:130] > StartLimitBurst=3
	I0923 13:36:14.414445    7084 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 13:36:14.414445    7084 command_runner.go:130] > [Service]
	I0923 13:36:14.414445    7084 command_runner.go:130] > Type=notify
	I0923 13:36:14.414445    7084 command_runner.go:130] > Restart=on-failure
	I0923 13:36:14.414992    7084 command_runner.go:130] > Environment=NO_PROXY=172.19.156.56
	I0923 13:36:14.414992    7084 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 13:36:14.415158    7084 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 13:36:14.415158    7084 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 13:36:14.415158    7084 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 13:36:14.415158    7084 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 13:36:14.415158    7084 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 13:36:14.415158    7084 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 13:36:14.415158    7084 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 13:36:14.415158    7084 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 13:36:14.415158    7084 command_runner.go:130] > ExecStart=
	I0923 13:36:14.415158    7084 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0923 13:36:14.415158    7084 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 13:36:14.415158    7084 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 13:36:14.415698    7084 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 13:36:14.415698    7084 command_runner.go:130] > LimitNOFILE=infinity
	I0923 13:36:14.415698    7084 command_runner.go:130] > LimitNPROC=infinity
	I0923 13:36:14.415698    7084 command_runner.go:130] > LimitCORE=infinity
	I0923 13:36:14.415776    7084 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 13:36:14.416098    7084 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 13:36:14.416098    7084 command_runner.go:130] > TasksMax=infinity
	I0923 13:36:14.416098    7084 command_runner.go:130] > TimeoutStartSec=0
	I0923 13:36:14.416098    7084 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 13:36:14.416098    7084 command_runner.go:130] > Delegate=yes
	I0923 13:36:14.416098    7084 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 13:36:14.416098    7084 command_runner.go:130] > KillMode=process
	I0923 13:36:14.416098    7084 command_runner.go:130] > [Install]
	I0923 13:36:14.416098    7084 command_runner.go:130] > WantedBy=multi-user.target
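The `docker.service` unit dumped above explains, in its own comments, why `ExecStart=` appears twice: systemd rejects multiple `ExecStart=` values for anything but `Type=oneshot`, so a drop-in that overrides the command must first clear the inherited value with an empty assignment. The pattern in isolation (the drop-in path and the `dockerd` flags below are illustrative, not minikube's exact command line):

```ini
# Illustrative drop-in, e.g. /etc/systemd/system/docker.service.d/override.conf
[Service]
# Empty assignment clears the ExecStart= inherited from the base unit.
# Without it, systemd sees two ExecStart= settings and refuses to start
# the service ("only allowed for Type=oneshot services").
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```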
	I0923 13:36:14.425001    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:36:14.451304    7084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:36:14.488332    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:36:14.520359    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:36:14.551137    7084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 13:36:14.612117    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:36:14.634311    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:36:14.664435    7084 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 13:36:14.674725    7084 ssh_runner.go:195] Run: which cri-dockerd
	I0923 13:36:14.680730    7084 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 13:36:14.687724    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 13:36:14.704598    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 13:36:14.747294    7084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 13:36:14.919247    7084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 13:36:15.088871    7084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 13:36:15.088999    7084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 13:36:15.131899    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:15.309103    7084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:36:17.930753    7084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6214727s)
	I0923 13:36:17.945404    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 13:36:17.979136    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:36:18.012751    7084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 13:36:18.204263    7084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 13:36:18.405143    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:18.599304    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 13:36:18.639727    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:36:18.671787    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:18.855165    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 13:36:18.964412    7084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 13:36:18.974388    7084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 13:36:18.983388    7084 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 13:36:18.983388    7084 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:36:18.983388    7084 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0923 13:36:18.983388    7084 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 13:36:18.983388    7084 command_runner.go:130] > Access: 2024-09-23 13:36:19.092396564 +0000
	I0923 13:36:18.983388    7084 command_runner.go:130] > Modify: 2024-09-23 13:36:19.092396564 +0000
	I0923 13:36:18.983388    7084 command_runner.go:130] > Change: 2024-09-23 13:36:19.095396707 +0000
	I0923 13:36:18.983388    7084 command_runner.go:130] >  Birth: -
	I0923 13:36:18.983388    7084 start.go:563] Will wait 60s for crictl version
	I0923 13:36:18.992392    7084 ssh_runner.go:195] Run: which crictl
	I0923 13:36:18.998484    7084 command_runner.go:130] > /usr/bin/crictl
	I0923 13:36:19.006926    7084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:36:19.062185    7084 command_runner.go:130] > Version:  0.1.0
	I0923 13:36:19.062185    7084 command_runner.go:130] > RuntimeName:  docker
	I0923 13:36:19.062185    7084 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 13:36:19.062307    7084 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:36:19.062307    7084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 13:36:19.073211    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:36:19.106821    7084 command_runner.go:130] > 27.3.0
	I0923 13:36:19.115070    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:36:19.142496    7084 command_runner.go:130] > 27.3.0
	I0923 13:36:19.145526    7084 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 13:36:19.148988    7084 out.go:177]   - env NO_PROXY=172.19.156.56
	I0923 13:36:19.151055    7084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 13:36:19.154681    7084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 13:36:19.154681    7084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 13:36:19.154681    7084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 13:36:19.154681    7084 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 13:36:19.156973    7084 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 13:36:19.156973    7084 ip.go:214] interface addr: 172.19.144.1/20
	I0923 13:36:19.165053    7084 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 13:36:19.171158    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
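The `/etc/hosts` update at 13:36:19.171 uses a grep-then-append idiom that is safe to run repeatedly: strip any stale `host.minikube.internal` line, then append the current mapping. A sketch against a temp file (the target IP is the one from this log; the stale IP and file paths are illustrative; bash's `$'\t'` quoting is assumed):

```shell
hosts=$(mktemp)   # stand-in for /etc/hosts, seeded with a stale entry
printf '127.0.0.1\tlocalhost\n172.19.144.9\thost.minikube.internal\n' > "$hosts"
# Drop any old host.minikube.internal line, then append the fresh one.
# Running this again produces the same file, so the update is idempotent.
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.19.144.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

The same idiom recurs later in the log for `control-plane.minikube.internal` (13:36:21.803).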
	I0923 13:36:19.191364    7084 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:36:19.191978    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:36:19.192503    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:36:21.049563    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:21.049563    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:21.049563    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:36:21.050315    7084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300 for IP: 172.19.147.0
	I0923 13:36:21.050315    7084 certs.go:194] generating shared ca certs ...
	I0923 13:36:21.050415    7084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:36:21.050822    7084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 13:36:21.051139    7084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 13:36:21.051256    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:36:21.051469    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:36:21.051561    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:36:21.051758    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:36:21.052050    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 13:36:21.052254    7084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 13:36:21.052356    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 13:36:21.052550    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 13:36:21.052748    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 13:36:21.053043    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 13:36:21.053375    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 13:36:21.053537    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.053630    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.053729    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.053917    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:36:21.104837    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 13:36:21.152343    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:36:21.197502    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:36:21.240643    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:36:21.286631    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 13:36:21.336928    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 13:36:21.389431    7084 ssh_runner.go:195] Run: openssl version
	I0923 13:36:21.398222    7084 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:36:21.407147    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 13:36:21.434123    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.440442    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.440442    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.448873    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 13:36:21.456402    7084 command_runner.go:130] > 3ec20f2e
	I0923 13:36:21.465009    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:36:21.491192    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:36:21.520176    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.529893    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.529893    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.538236    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:36:21.546144    7084 command_runner.go:130] > b5213941
	I0923 13:36:21.553889    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:36:21.581219    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 13:36:21.607943    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.614438    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.614438    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.622163    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 13:36:21.633943    7084 command_runner.go:130] > 51391683
	I0923 13:36:21.647027    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
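The `openssl x509 -hash` sequence above (13:36:21.4–21.6) is how the copied certs become trusted system-wide: OpenSSL locates CA certificates in `/etc/ssl/certs` by a subject-hash filename, so for each PEM the hash is computed and a `<hash>.0` symlink is created next to it. A self-contained sketch using a throwaway self-signed certificate (all paths and names are illustrative; the `openssl` CLI is assumed to be installed):

```shell
dir=$(mktemp -d)    # stand-in for /etc/ssl/certs
# Generate a throwaway self-signed cert just to have something to hash
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' -days 1 \
  -keyout "$dir/demo.key" -out "$dir/demo.pem" 2>/dev/null
# The same two steps as the log: compute the subject hash,
# then symlink <hash>.0 -> certificate so OpenSSL lookup finds it.
h=$(openssl x509 -hash -noout -in "$dir/demo.pem")
ln -fs "$dir/demo.pem" "$dir/$h.0"
ls -l "$dir/$h.0"
```

The `.0` suffix is a collision counter: a second distinct CA with the same subject hash would be installed as `<hash>.1`.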
	I0923 13:36:21.676342    7084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:36:21.683019    7084 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:36:21.683112    7084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:36:21.683300    7084 kubeadm.go:934] updating node {m02 172.19.147.0 8443 v1.31.1 docker false true} ...
	I0923 13:36:21.683516    7084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-560300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.147.0
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:36:21.691313    7084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:36:21.707891    7084 command_runner.go:130] > kubeadm
	I0923 13:36:21.707891    7084 command_runner.go:130] > kubectl
	I0923 13:36:21.707891    7084 command_runner.go:130] > kubelet
	I0923 13:36:21.707891    7084 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:36:21.716283    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0923 13:36:21.732905    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0923 13:36:21.760833    7084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:36:21.797865    7084 ssh_runner.go:195] Run: grep 172.19.156.56	control-plane.minikube.internal$ /etc/hosts
	I0923 13:36:21.803914    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.156.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:36:21.834957    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:22.024617    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:36:22.053071    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:36:22.053706    7084 start.go:317] joinCluster: &{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.154.147 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-ga
dget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:36:22.053882    7084 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:36:22.053950    7084 host.go:66] Checking if "multinode-560300-m02" exists ...
	I0923 13:36:22.054448    7084 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:36:22.054840    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:36:22.055599    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:36:23.962100    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:23.962100    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:23.962100    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:36:23.962727    7084 api_server.go:166] Checking apiserver status ...
	I0923 13:36:23.971561    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:36:23.971561    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:36:25.907789    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:25.908069    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:25.908069    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:28.196478    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:36:28.197271    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:28.197502    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:36:28.317786    7084 command_runner.go:130] > 1960
	I0923 13:36:28.317871    7084 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.3460173s)
	I0923 13:36:28.329761    7084 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup
	W0923 13:36:28.347670    7084 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 13:36:28.357817    7084 ssh_runner.go:195] Run: ls
	I0923 13:36:28.364704    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:36:28.372709    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 200:
	ok
	I0923 13:36:28.380991    7084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-560300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0923 13:36:28.544540    7084 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-qg99z, kube-system/kube-proxy-g5t97
	I0923 13:36:31.577230    7084 command_runner.go:130] > node/multinode-560300-m02 cordoned
	I0923 13:36:31.577230    7084 command_runner.go:130] > pod "busybox-7dff88458-h4tgf" has DeletionTimestamp older than 1 seconds, skipping
	I0923 13:36:31.577230    7084 command_runner.go:130] > node/multinode-560300-m02 drained
	I0923 13:36:31.577230    7084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-560300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1960231s)
	I0923 13:36:31.577230    7084 node.go:128] successfully drained node "multinode-560300-m02"
	I0923 13:36:31.577230    7084 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0923 13:36:31.577230    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:36:33.478159    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:33.478969    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:33.478969    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:35.776912    7084 main.go:141] libmachine: [stdout =====>] : 172.19.147.0
	
	I0923 13:36:35.776912    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:35.777244    7084 sshutil.go:53] new ssh client: &{IP:172.19.147.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:36:36.203651    7084 command_runner.go:130] ! W0923 13:36:36.414712    1626 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0923 13:36:36.387949    7084 command_runner.go:130] ! W0923 13:36:36.599117    1626 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod 702a0be4f578ab523bfb36ecbcabeeaa4f27321a3db902ef507a9b1288a59f98: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-7dff88458-h4tgf_default" network: cni config uninitialized
	I0923 13:36:36.404945    7084 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:36:36.404945    7084 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0923 13:36:36.405669    7084 command_runner.go:130] > [reset] Stopping the kubelet service
	I0923 13:36:36.405669    7084 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0923 13:36:36.405669    7084 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0923 13:36:36.405669    7084 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0923 13:36:36.405761    7084 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0923 13:36:36.405761    7084 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0923 13:36:36.405761    7084 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0923 13:36:36.405761    7084 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0923 13:36:36.405761    7084 command_runner.go:130] > to reset your system's IPVS tables.
	I0923 13:36:36.405761    7084 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0923 13:36:36.405761    7084 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0923 13:36:36.405761    7084 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.8282054s)
	I0923 13:36:36.405761    7084 node.go:155] successfully reset node "multinode-560300-m02"
	I0923 13:36:36.406567    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:36:36.407168    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:36:36.408382    7084 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 13:36:36.408382    7084 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0923 13:36:36.408382    7084 round_trippers.go:463] DELETE https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:36.408382    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:36.408382    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:36.408382    7084 round_trippers.go:473]     Content-Type: application/json
	I0923 13:36:36.408382    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:36.426908    7084 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0923 13:36:36.426908    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Audit-Id: dd630e74-552d-4ffd-92b7-e03407fd930b
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:36.426908    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:36.426908    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Content-Length: 171
	I0923 13:36:36.426908    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:36 GMT
	I0923 13:36:36.426908    7084 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-560300-m02","kind":"nodes","uid":"2c34b716-9f14-4101-b3a4-7e1e7b05e18d"}}
	I0923 13:36:36.426908    7084 node.go:180] successfully deleted node "multinode-560300-m02"
	I0923 13:36:36.426908    7084 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:36:36.426908    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 13:36:36.426908    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:36:38.258534    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:36:38.258534    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:38.258854    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:36:40.468700    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:36:40.468700    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:36:40.469544    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:36:40.640050    7084 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 70r3de.qyuhbp2j0cw96rtj --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
	I0923 13:36:40.640050    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.2128576s)
	I0923 13:36:40.640376    7084 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:36:40.640376    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 70r3de.qyuhbp2j0cw96rtj --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m02"
	I0923 13:36:40.705551    7084 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:36:40.875996    7084 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0923 13:36:40.876082    7084 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0923 13:36:40.939063    7084 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:36:40.939063    7084 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:36:40.939385    7084 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 13:36:41.142606    7084 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:36:42.144353    7084 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001330567s
	I0923 13:36:42.144493    7084 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0923 13:36:42.178663    7084 command_runner.go:130] > This node has joined the cluster:
	I0923 13:36:42.178743    7084 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0923 13:36:42.178743    7084 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0923 13:36:42.178743    7084 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0923 13:36:42.182137    7084 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:36:42.182254    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 70r3de.qyuhbp2j0cw96rtj --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m02": (1.5417002s)
	I0923 13:36:42.182254    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 13:36:42.542705    7084 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0923 13:36:42.556805    7084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-560300-m02 minikube.k8s.io/updated_at=2024_09_23T13_36_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=multinode-560300 minikube.k8s.io/primary=false
	I0923 13:36:42.682325    7084 command_runner.go:130] > node/multinode-560300-m02 labeled
	I0923 13:36:42.682325    7084 start.go:319] duration metric: took 20.6272268s to joinCluster
	I0923 13:36:42.682325    7084 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0923 13:36:42.683714    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:36:42.685993    7084 out.go:177] * Verifying Kubernetes components...
	I0923 13:36:42.700468    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:36:42.919635    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:36:42.951820    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:36:42.952508    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:36:42.953056    7084 node_ready.go:35] waiting up to 6m0s for node "multinode-560300-m02" to be "Ready" ...
	I0923 13:36:42.953056    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:42.953056    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:42.953056    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:42.953056    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:42.956583    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:42.956924    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:42.956924    7084 round_trippers.go:580]     Audit-Id: 0e115d29-08f7-48c4-8646-d4b996ded1be
	I0923 13:36:42.956924    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:42.956924    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:42.957006    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:42.957006    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:42.957006    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:43 GMT
	I0923 13:36:42.957164    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:43.453639    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:43.453639    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:43.453639    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:43.453639    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:43.458112    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:43.458112    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:43.458112    7084 round_trippers.go:580]     Audit-Id: d1c64bbc-36b7-4837-b06b-7ef6dcb16309
	I0923 13:36:43.458112    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:43.458112    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:43.458112    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:43.458112    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:43.458112    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:43 GMT
	I0923 13:36:43.458112    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:43.954233    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:43.954233    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:43.954233    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:43.954233    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:43.957698    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:43.957786    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:43.957786    7084 round_trippers.go:580]     Audit-Id: 1caa83bf-e563-4abe-bd84-41b50a64a2b0
	I0923 13:36:43.957786    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:43.957786    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:43.957786    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:43.957786    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:43.957786    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:44 GMT
	I0923 13:36:43.957944    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:44.453811    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:44.453811    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:44.453811    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:44.453811    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:44.457868    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:44.457868    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:44.457983    7084 round_trippers.go:580]     Audit-Id: 1f456e7e-02f7-4ba6-9fa4-1e24ea21ae0b
	I0923 13:36:44.457983    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:44.457983    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:44.457983    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:44.457983    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:44.457983    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:44 GMT
	I0923 13:36:44.458141    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:44.953590    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:44.953590    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:44.953590    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:44.953590    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:44.957452    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:44.957452    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:44.957452    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:45 GMT
	I0923 13:36:44.957452    7084 round_trippers.go:580]     Audit-Id: 71961eab-5dc1-4f00-91bc-efa8dcbb10ca
	I0923 13:36:44.957452    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:44.957452    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:44.957452    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:44.957452    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:44.957452    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:44.958239    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:45.453311    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:45.453311    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:45.453311    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:45.453311    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:45.457106    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:45.457106    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:45.457106    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:45 GMT
	I0923 13:36:45.457106    7084 round_trippers.go:580]     Audit-Id: 752ffd32-7ecf-48ed-9a1a-00235ff1999a
	I0923 13:36:45.457106    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:45.457106    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:45.457106    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:45.457106    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:45.457319    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:45.954151    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:45.954151    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:45.954151    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:45.954151    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:45.958377    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:45.958377    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:45.958377    7084 round_trippers.go:580]     Audit-Id: 5104288e-110a-42a7-b126-18ea340a3e42
	I0923 13:36:45.958377    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:45.958377    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:45.958377    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:45.958377    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:45.958377    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:46 GMT
	I0923 13:36:45.958569    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1971","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3565 chars]
	I0923 13:36:46.453895    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:46.453895    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:46.453895    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:46.453895    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:46.456867    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:46.456867    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:46.456867    7084 round_trippers.go:580]     Audit-Id: 29c24c5d-39a1-462a-a58e-5dffcdb7b97a
	I0923 13:36:46.456941    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:46.456941    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:46.456941    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:46.456941    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:46.456941    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:46 GMT
	I0923 13:36:46.456941    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:46.954589    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:46.954589    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:46.954589    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:46.954589    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:46.958204    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:46.958243    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:46.958243    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:46.958243    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:46.958243    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:46.958243    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:47 GMT
	I0923 13:36:46.958243    7084 round_trippers.go:580]     Audit-Id: d49f5f32-09c8-42df-a4b9-55098feff7bc
	I0923 13:36:46.958243    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:46.958243    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:47.453801    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:47.453801    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:47.453801    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:47.453801    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:47.458361    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:47.458698    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:47.458698    7084 round_trippers.go:580]     Audit-Id: 2e9e482d-4bae-43ca-be38-87bc9379904d
	I0923 13:36:47.458698    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:47.458698    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:47.458698    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:47.458698    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:47.458698    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:47 GMT
	I0923 13:36:47.458895    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:47.459257    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:47.953595    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:47.953595    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:47.953595    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:47.953595    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:47.959065    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:36:47.959065    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:47.959065    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:47.959122    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:47.959122    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:48 GMT
	I0923 13:36:47.959122    7084 round_trippers.go:580]     Audit-Id: 5c36be26-e9b5-461d-b2e2-24c73620f4d0
	I0923 13:36:47.959122    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:47.959122    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:47.959227    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:48.453780    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:48.453780    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:48.453780    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:48.453780    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:48.458381    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:48.458381    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:48.458381    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:48.458381    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:48.458381    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:48.458381    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:48 GMT
	I0923 13:36:48.458381    7084 round_trippers.go:580]     Audit-Id: 1c7bbb79-348a-4158-9192-a070c24fc6cc
	I0923 13:36:48.458381    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:48.458692    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:48.954303    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:48.954303    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:48.954303    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:48.954303    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:48.958320    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:48.958320    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:48.958320    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:48.958320    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:49 GMT
	I0923 13:36:48.958320    7084 round_trippers.go:580]     Audit-Id: 4defb95a-4a0d-45d6-a3c6-d94484cae8dd
	I0923 13:36:48.958320    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:48.958320    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:48.958320    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:48.958320    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:49.453822    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:49.453822    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:49.453822    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:49.453822    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:49.457701    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:49.457784    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:49.457784    7084 round_trippers.go:580]     Audit-Id: 9dc45b75-65b6-40d3-9ba3-6cc378cc0031
	I0923 13:36:49.457784    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:49.457784    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:49.457858    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:49.457858    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:49.457858    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:49 GMT
	I0923 13:36:49.457960    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:49.954478    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:49.954478    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:49.954478    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:49.954478    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:49.958687    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:49.958687    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:49.958687    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:49.958687    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:50 GMT
	I0923 13:36:49.958687    7084 round_trippers.go:580]     Audit-Id: ec51b95a-be03-4a5e-8872-f4e3a6a37ce8
	I0923 13:36:49.958687    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:49.958687    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:49.958687    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:49.958687    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:49.959548    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:50.453911    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:50.453911    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:50.453911    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:50.453911    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:50.457919    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:50.458214    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:50.458275    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:50.458275    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:50.458275    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:50.458275    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:50.458275    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:50 GMT
	I0923 13:36:50.458275    7084 round_trippers.go:580]     Audit-Id: 29b4b7f1-fa89-4ae3-a304-fa67673f026a
	I0923 13:36:50.458412    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:50.955158    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:50.955158    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:50.955158    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:50.955158    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:50.958507    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:50.958583    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:50.958648    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:50.958648    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:50.958672    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:51 GMT
	I0923 13:36:50.958672    7084 round_trippers.go:580]     Audit-Id: ae2e7d6a-f2a9-4ca8-8331-4ceeaa528a7a
	I0923 13:36:50.958672    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:50.958672    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:50.958795    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:51.454346    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:51.454346    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:51.454346    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:51.454346    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:51.458974    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:51.458974    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:51.458974    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:51.458974    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:51.458974    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:51.458974    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:51.458974    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:51 GMT
	I0923 13:36:51.459099    7084 round_trippers.go:580]     Audit-Id: e3c76321-bb07-46a8-9c44-15d3d1eb222f
	I0923 13:36:51.459594    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:51.955218    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:51.955218    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:51.955218    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:51.955218    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:51.957546    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:51.958505    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:51.958505    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:51.958505    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:51.958505    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:52 GMT
	I0923 13:36:51.958505    7084 round_trippers.go:580]     Audit-Id: fa71ed6f-0274-48e0-bb1c-e847d04ed3c8
	I0923 13:36:51.958505    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:51.958505    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:51.958607    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"1994","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3674 chars]
	I0923 13:36:52.454641    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:52.454641    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:52.454641    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:52.454641    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:52.459781    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:36:52.459895    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:52.459895    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:52 GMT
	I0923 13:36:52.459895    7084 round_trippers.go:580]     Audit-Id: 68cff358-6dc9-49e8-a18b-0c7f581ca79f
	I0923 13:36:52.459895    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:52.459895    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:52.459895    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:52.459895    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:52.460052    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:52.460154    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:52.954540    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:52.954540    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:52.954540    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:52.954540    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:52.958387    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:52.958387    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:52.958387    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:52.958387    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:52.958387    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:53 GMT
	I0923 13:36:52.958387    7084 round_trippers.go:580]     Audit-Id: 4ea2b207-d49f-4ae6-98b7-31819d70f76f
	I0923 13:36:52.958387    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:52.958387    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:52.958917    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:53.454665    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:53.454665    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:53.454665    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:53.454665    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:53.458593    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:53.458593    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:53.458593    7084 round_trippers.go:580]     Audit-Id: 7ece5a8c-b93d-441b-bb64-140a2aab3421
	I0923 13:36:53.458696    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:53.458696    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:53.458696    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:53.458696    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:53.458696    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:53 GMT
	I0923 13:36:53.458834    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:53.954823    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:53.954823    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:53.954823    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:53.954823    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:53.958679    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:53.958679    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:53.958679    7084 round_trippers.go:580]     Audit-Id: 581adf1d-fb67-404b-80ab-f5b5d5245800
	I0923 13:36:53.958679    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:53.958679    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:53.958679    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:53.958679    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:53.958679    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:54 GMT
	I0923 13:36:53.958679    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:54.455207    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:54.455207    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:54.455207    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:54.455207    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:54.459396    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:54.459396    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:54.459396    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:54.459396    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:54 GMT
	I0923 13:36:54.459396    7084 round_trippers.go:580]     Audit-Id: c543700e-f752-4757-a270-9f6ed98efb4c
	I0923 13:36:54.459590    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:54.459590    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:54.459590    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:54.459797    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:54.460296    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:54.954116    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:54.954116    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:54.954116    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:54.954716    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:54.958038    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:54.958038    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:54.958125    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:54.958125    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:54.958125    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:54.958125    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:55 GMT
	I0923 13:36:54.958186    7084 round_trippers.go:580]     Audit-Id: af4c111c-5449-448c-bc2a-7839b743148d
	I0923 13:36:54.958186    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:54.958504    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:55.454788    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:55.454788    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:55.454788    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:55.454788    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:55.460057    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:36:55.460057    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:55.460143    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:55 GMT
	I0923 13:36:55.460143    7084 round_trippers.go:580]     Audit-Id: 413faf3d-678b-4919-8dec-646b5b5d93a3
	I0923 13:36:55.460143    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:55.460143    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:55.460143    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:55.460143    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:55.460286    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:55.954447    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:55.954447    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:55.954447    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:55.954447    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:55.959162    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:55.959162    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:55.959162    7084 round_trippers.go:580]     Audit-Id: 39172ad5-1dce-42cc-acb7-c1735d7beb52
	I0923 13:36:55.959162    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:55.959162    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:55.959162    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:55.959162    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:55.959162    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:56 GMT
	I0923 13:36:55.959354    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:56.455686    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:56.455686    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:56.455686    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:56.455686    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:56.460001    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:56.460001    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:56.460001    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:56 GMT
	I0923 13:36:56.460001    7084 round_trippers.go:580]     Audit-Id: 90716a7f-723d-4893-90ba-10842a20441d
	I0923 13:36:56.460001    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:56.460001    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:56.460001    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:56.460001    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:56.460001    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:56.460532    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:56.954053    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:56.954053    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:56.954053    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:56.954053    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:56.958535    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:56.958740    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:56.958740    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:56.958864    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:56.958864    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:56.958864    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:56.958864    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:57 GMT
	I0923 13:36:56.958864    7084 round_trippers.go:580]     Audit-Id: d372752f-c22b-491d-a79f-da84862073ba
	I0923 13:36:56.959151    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:57.454425    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:57.454425    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:57.454425    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:57.454425    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:57.457926    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:57.457926    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:57.458605    7084 round_trippers.go:580]     Audit-Id: cf551955-4c7a-4c6f-ba3f-74b38f64d78b
	I0923 13:36:57.458605    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:57.458605    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:57.458605    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:57.458605    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:57.458605    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:57 GMT
	I0923 13:36:57.458779    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:57.954737    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:57.954737    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:57.954737    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:57.954737    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:57.958831    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:57.958831    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:57.958831    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:57.958831    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:57.958831    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:57.958831    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:58 GMT
	I0923 13:36:57.959046    7084 round_trippers.go:580]     Audit-Id: e7295743-45c0-4181-b492-618e701d4626
	I0923 13:36:57.959046    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:57.959187    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:58.455414    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:58.455489    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:58.455489    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:58.455489    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:58.459918    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:58.459994    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:58.459994    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:58.459994    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:58.459994    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:58.459994    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:58 GMT
	I0923 13:36:58.460081    7084 round_trippers.go:580]     Audit-Id: 3729aed3-fb12-4437-8da9-530bc322500e
	I0923 13:36:58.460081    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:58.460305    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:58.460994    7084 node_ready.go:53] node "multinode-560300-m02" has status "Ready":"False"
	I0923 13:36:58.955035    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:58.955035    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:58.955035    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:58.955035    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:58.959117    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:58.959117    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:58.959117    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:58.959117    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:58.959117    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:58.959334    7084 round_trippers.go:580]     Audit-Id: f7f787d3-83b4-4592-a7d2-98c8750e72f9
	I0923 13:36:58.959334    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:58.959334    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:58.959526    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2003","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4066 chars]
	I0923 13:36:59.454377    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:36:59.454377    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.454377    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.454377    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.458362    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:59.458362    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.458362    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.458362    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.458362    7084 round_trippers.go:580]     Audit-Id: bb1c7dc7-d8ca-40c2-a5bc-42a9c743cb0a
	I0923 13:36:59.458362    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.458362    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.458362    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.458362    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2012","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0923 13:36:59.459488    7084 node_ready.go:49] node "multinode-560300-m02" has status "Ready":"True"
	I0923 13:36:59.459576    7084 node_ready.go:38] duration metric: took 16.5053179s for node "multinode-560300-m02" to be "Ready" ...
	I0923 13:36:59.459576    7084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:36:59.459756    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:36:59.459756    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.459844    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.459844    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.464077    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:36:59.464077    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.464077    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.464077    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.464077    7084 round_trippers.go:580]     Audit-Id: 78b592f8-978b-4512-b295-3a9fa37787b1
	I0923 13:36:59.464077    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.464077    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.464077    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.465585    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2015"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89511 chars]
	I0923 13:36:59.471115    7084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.471115    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:36:59.471115    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.471115    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.471115    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.473671    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.473671    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.473671    7084 round_trippers.go:580]     Audit-Id: 8f475265-4aac-4bb7-b416-86d707aa0689
	I0923 13:36:59.473671    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.473671    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.473671    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.473671    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.473671    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.473671    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0923 13:36:59.474672    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:36:59.474672    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.474672    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.474672    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.477649    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.477649    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.477649    7084 round_trippers.go:580]     Audit-Id: 9e40a977-cd45-428a-9719-87e015046368
	I0923 13:36:59.477649    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.477649    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.477649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.477649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.477649    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.477858    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:36:59.478315    7084 pod_ready.go:93] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"True"
	I0923 13:36:59.478387    7084 pod_ready.go:82] duration metric: took 7.2718ms for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.478387    7084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.478512    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:36:59.478512    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.478512    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.478512    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.480889    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.480889    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.480889    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.480889    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.480889    7084 round_trippers.go:580]     Audit-Id: 69843a5d-4ad7-4ca2-80ee-75bb64f4f1c6
	I0923 13:36:59.480968    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.480968    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.480968    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.481103    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1822","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0923 13:36:59.481589    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:36:59.481589    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.481589    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.481589    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.483888    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.483888    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.483888    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.483888    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.483888    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.483888    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.483888    7084 round_trippers.go:580]     Audit-Id: ec2df474-0040-4a8a-9edd-8cc51bcc38d0
	I0923 13:36:59.483888    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.483888    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:36:59.484523    7084 pod_ready.go:93] pod "etcd-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:36:59.484594    7084 pod_ready.go:82] duration metric: took 6.1667ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.484594    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.484712    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:36:59.484712    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.484712    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.484712    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.487169    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.487169    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.487169    7084 round_trippers.go:580]     Audit-Id: 5721af0c-b8f9-456a-9288-99902124c5de
	I0923 13:36:59.487169    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.487169    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.487169    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.487169    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.487169    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.487169    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"c88cb5c4-fe30-4354-bf55-1f281cf11190","resourceVersion":"1816","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.156.56:8443","kubernetes.io/config.hash":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.mirror":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.seen":"2024-09-23T13:34:12.942044692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0923 13:36:59.487864    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:36:59.487935    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.487935    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.487972    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.490041    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.490950    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.490950    7084 round_trippers.go:580]     Audit-Id: 46fa7b0b-bf01-4d80-a06d-8db181bc1f02
	I0923 13:36:59.490950    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.490950    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.491019    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.491019    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.491019    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.491160    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:36:59.491809    7084 pod_ready.go:93] pod "kube-apiserver-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:36:59.491881    7084 pod_ready.go:82] duration metric: took 7.2865ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.491964    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.491964    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:36:59.491964    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.491964    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.491964    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.494519    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.494519    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.494519    7084 round_trippers.go:580]     Audit-Id: 7aeb8129-1fa5-483b-9e0e-436cbe38148e
	I0923 13:36:59.494519    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.494519    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.494519    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.494519    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.494519    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.494519    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"1809","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0923 13:36:59.495515    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:36:59.495515    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.495515    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.495515    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.498649    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:36:59.498649    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.498649    7084 round_trippers.go:580]     Audit-Id: d325e9e5-f3a9-42ab-8475-c71e1d61bde5
	I0923 13:36:59.498649    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.498649    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.498649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.498649    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.498649    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.498819    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:36:59.499181    7084 pod_ready.go:93] pod "kube-controller-manager-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:36:59.499181    7084 pod_ready.go:82] duration metric: took 7.2168ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.499181    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:36:59.654679    7084 request.go:632] Waited for 155.4256ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:36:59.654679    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:36:59.654679    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.654679    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.654679    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.657963    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:59.657963    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.658260    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.658260    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:36:59 GMT
	I0923 13:36:59.658260    7084 round_trippers.go:580]     Audit-Id: 54b6ecbb-80eb-4ca0-a52c-27d0523ed777
	I0923 13:36:59.658260    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.658260    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.658260    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.658607    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dbkdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"44a5a18e-0e93-4293-8d4b-13e3ec9acfef","resourceVersion":"1660","creationTimestamp":"2024-09-23T13:20:08Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:20:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0923 13:36:59.855474    7084 request.go:632] Waited for 196.3273ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:36:59.855694    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:36:59.855694    7084 round_trippers.go:469] Request Headers:
	I0923 13:36:59.855694    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:36:59.855694    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:36:59.859370    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:36:59.859370    7084 round_trippers.go:577] Response Headers:
	I0923 13:36:59.859370    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:36:59.859370    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:36:59.859370    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:36:59.859370    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:36:59.859370    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:36:59.859370    7084 round_trippers.go:580]     Audit-Id: db6acff9-50d7-4256-a88a-190f0cde17e3
	I0923 13:36:59.859569    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"781efd95-4e81-4850-a300-9cef56c5e6d4","resourceVersion":"1852","creationTimestamp":"2024-09-23T13:30:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_30_01_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:30:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4401 chars]
	I0923 13:36:59.860021    7084 pod_ready.go:98] node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:36:59.860021    7084 pod_ready.go:82] duration metric: took 360.8151ms for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	E0923 13:36:59.860021    7084 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-560300-m03" hosting pod "kube-proxy-dbkdp" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-560300-m03" has status "Ready":"Unknown"
	I0923 13:36:59.860091    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.054755    7084 request.go:632] Waited for 194.6504ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:37:00.054755    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:37:00.054755    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.054755    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.054755    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.058701    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:37:00.058701    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.058701    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.058701    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:37:00.058701    7084 round_trippers.go:580]     Audit-Id: 8084e2c0-d695-4523-b553-c56c17152654
	I0923 13:37:00.058701    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.058701    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.058701    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.058989    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5t97","generateName":"kube-proxy-","namespace":"kube-system","uid":"49d7601a-bda4-421e-bde7-acc35c157962","resourceVersion":"1982","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0923 13:37:00.254684    7084 request.go:632] Waited for 195.1237ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:37:00.254684    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:37:00.254684    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.254684    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.254684    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.258754    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:37:00.258987    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.258987    7084 round_trippers.go:580]     Audit-Id: b172aac1-9df1-4847-8d93-262f13940f95
	I0923 13:37:00.258987    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.258987    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.258987    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.258987    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.258987    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:37:00.259320    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2012","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0923 13:37:00.259320    7084 pod_ready.go:93] pod "kube-proxy-g5t97" in "kube-system" namespace has status "Ready":"True"
	I0923 13:37:00.259320    7084 pod_ready.go:82] duration metric: took 399.2022ms for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.259320    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.454884    7084 request.go:632] Waited for 194.9867ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:37:00.454884    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:37:00.454884    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.454884    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.454884    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.460096    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:37:00.460244    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.460244    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.460311    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.460311    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.460311    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.460311    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:37:00.460311    7084 round_trippers.go:580]     Audit-Id: 856191c6-b9bd-4446-a4df-e0ae34422995
	I0923 13:37:00.460922    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"1800","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0923 13:37:00.655345    7084 request.go:632] Waited for 193.48ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:37:00.655624    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:37:00.655624    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.655624    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.655624    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.659250    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:37:00.659250    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.659250    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:00 GMT
	I0923 13:37:00.659250    7084 round_trippers.go:580]     Audit-Id: d8267fb9-6eed-4c10-87b0-42be2b27c4a1
	I0923 13:37:00.659250    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.659250    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.659250    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.659250    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.659827    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:37:00.660525    7084 pod_ready.go:93] pod "kube-proxy-rgmcw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:37:00.660618    7084 pod_ready.go:82] duration metric: took 401.2706ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.660618    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:00.855392    7084 request.go:632] Waited for 194.5968ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:37:00.855392    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:37:00.855392    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:00.855948    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:00.855948    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:00.859535    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:37:00.859744    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:00.859744    7084 round_trippers.go:580]     Audit-Id: 6b492a2c-3f2b-4c7d-b1bb-ddcc422ada3a
	I0923 13:37:00.859744    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:00.859744    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:00.859744    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:00.859744    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:00.859744    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:01 GMT
	I0923 13:37:00.859941    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"1810","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0923 13:37:01.054747    7084 request.go:632] Waited for 194.2927ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:37:01.054747    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:37:01.054747    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:01.054747    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:01.054747    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:01.058965    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:37:01.058965    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:01.059508    7084 round_trippers.go:580]     Audit-Id: e18a1540-d0c9-4090-b41f-5cbdc6ab80fd
	I0923 13:37:01.059508    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:01.059508    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:01.059508    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:01.059508    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:01.059508    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:01 GMT
	I0923 13:37:01.059710    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:37:01.060112    7084 pod_ready.go:93] pod "kube-scheduler-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:37:01.060195    7084 pod_ready.go:82] duration metric: took 399.467ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:37:01.060195    7084 pod_ready.go:39] duration metric: took 1.6005102s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:37:01.060195    7084 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:37:01.069491    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:37:01.094864    7084 system_svc.go:56] duration metric: took 34.6672ms WaitForService to wait for kubelet
	I0923 13:37:01.094864    7084 kubeadm.go:582] duration metric: took 18.411296s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:37:01.094864    7084 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:37:01.255537    7084 request.go:632] Waited for 160.6621ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes
	I0923 13:37:01.255537    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes
	I0923 13:37:01.255537    7084 round_trippers.go:469] Request Headers:
	I0923 13:37:01.255537    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:37:01.255537    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:37:01.261569    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:37:01.261673    7084 round_trippers.go:577] Response Headers:
	I0923 13:37:01.261673    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:37:01 GMT
	I0923 13:37:01.261673    7084 round_trippers.go:580]     Audit-Id: c944582e-c764-4693-87ac-9d216cc055d3
	I0923 13:37:01.261762    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:37:01.261762    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:37:01.261762    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:37:01.261762    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:37:01.261891    7084 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2018"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15608 chars]
	I0923 13:37:01.262542    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:37:01.262542    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:37:01.262542    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:37:01.262542    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:37:01.262542    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:37:01.262542    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:37:01.262542    7084 node_conditions.go:105] duration metric: took 167.6663ms to run NodePressure ...
	I0923 13:37:01.262542    7084 start.go:241] waiting for startup goroutines ...
	I0923 13:37:01.263063    7084 start.go:255] writing updated cluster config ...
	I0923 13:37:01.266908    7084 out.go:201] 
	I0923 13:37:01.270063    7084 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:37:01.281074    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:37:01.281074    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:37:01.287435    7084 out.go:177] * Starting "multinode-560300-m03" worker node in "multinode-560300" cluster
	I0923 13:37:01.289774    7084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 13:37:01.289774    7084 cache.go:56] Caching tarball of preloaded images
	I0923 13:37:01.289774    7084 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 13:37:01.289774    7084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 13:37:01.289774    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:37:01.298995    7084 start.go:360] acquireMachinesLock for multinode-560300-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 13:37:01.299492    7084 start.go:364] duration metric: took 496.4µs to acquireMachinesLock for "multinode-560300-m03"
	I0923 13:37:01.299618    7084 start.go:96] Skipping create...Using existing machine configuration
	I0923 13:37:01.299649    7084 fix.go:54] fixHost starting: m03
	I0923 13:37:01.299745    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:03.121132    7084 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 13:37:03.122017    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:03.122075    7084 fix.go:112] recreateIfNeeded on multinode-560300-m03: state=Stopped err=<nil>
	W0923 13:37:03.122075    7084 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 13:37:03.125875    7084 out.go:177] * Restarting existing hyperv VM for "multinode-560300-m03" ...
	I0923 13:37:03.127697    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-560300-m03
	I0923 13:37:05.867779    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:05.868419    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:05.868419    7084 main.go:141] libmachine: Waiting for host to start...
	I0923 13:37:05.868419    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:07.848897    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:07.848897    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:07.849040    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:10.010052    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:10.010052    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:11.010634    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:12.938704    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:12.938704    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:12.938880    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:15.131010    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:15.131010    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:16.131814    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:18.034804    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:18.035315    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:18.035367    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:20.202934    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:20.202934    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:21.203323    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:23.129870    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:23.129870    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:23.130744    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:25.305058    7084 main.go:141] libmachine: [stdout =====>] : 
	I0923 13:37:25.305344    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:26.305730    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:28.244684    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:28.244684    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:28.244748    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:30.573163    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:30.574154    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:30.576308    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:32.434487    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:32.434487    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:32.434901    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:34.660736    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:34.660901    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:34.661245    7084 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300\config.json ...
	I0923 13:37:34.664535    7084 machine.go:93] provisionDockerMachine start ...
	I0923 13:37:34.664679    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:36.500120    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:36.500120    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:36.501259    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:38.714540    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:38.714589    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:38.717773    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:37:38.718495    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:37:38.718495    7084 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 13:37:38.854847    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 13:37:38.854847    7084 buildroot.go:166] provisioning hostname "multinode-560300-m03"
	I0923 13:37:38.854847    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:40.662730    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:40.663733    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:40.663733    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:42.869224    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:42.869224    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:42.873221    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:37:42.873599    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:37:42.873669    7084 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-560300-m03 && echo "multinode-560300-m03" | sudo tee /etc/hostname
	I0923 13:37:43.039310    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-560300-m03
	
	I0923 13:37:43.039394    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:44.885728    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:44.885728    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:44.885728    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:47.096980    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:47.097067    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:47.101955    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:37:47.102865    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:37:47.102933    7084 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-560300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-560300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-560300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 13:37:47.252655    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 13:37:47.252655    7084 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0923 13:37:47.252655    7084 buildroot.go:174] setting up certificates
	I0923 13:37:47.252655    7084 provision.go:84] configureAuth start
	I0923 13:37:47.252655    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:49.078490    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:49.078848    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:49.078848    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:51.338272    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:51.338272    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:51.338932    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:53.167516    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:53.167516    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:53.168462    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:55.397846    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:55.397846    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:55.397846    7084 provision.go:143] copyHostCerts
	I0923 13:37:55.397846    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0923 13:37:55.397846    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0923 13:37:55.397846    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0923 13:37:55.398502    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0923 13:37:55.399133    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0923 13:37:55.399133    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0923 13:37:55.399133    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0923 13:37:55.399852    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 13:37:55.400336    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0923 13:37:55.400856    7084 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0923 13:37:55.400856    7084 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0923 13:37:55.401015    7084 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 13:37:55.401820    7084 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-560300-m03 san=[127.0.0.1 172.19.145.249 localhost minikube multinode-560300-m03]
	I0923 13:37:55.527453    7084 provision.go:177] copyRemoteCerts
	I0923 13:37:55.533864    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 13:37:55.533864    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:37:57.349394    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:37:57.349394    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:57.349394    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:37:59.533723    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:37:59.534366    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:37:59.534848    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:37:59.654320    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1191516s)
	I0923 13:37:59.654373    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 13:37:59.654735    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 13:37:59.698483    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 13:37:59.698669    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0923 13:37:59.739774    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 13:37:59.740337    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 13:37:59.782006    7084 provision.go:87] duration metric: took 12.5285055s to configureAuth
	I0923 13:37:59.782006    7084 buildroot.go:189] setting minikube options for container-runtime
	I0923 13:37:59.782534    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:37:59.782630    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:01.648883    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:01.649103    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:01.649103    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:03.899457    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:03.899528    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:03.902921    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:03.903517    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:03.903517    7084 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 13:38:04.052590    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 13:38:04.052643    7084 buildroot.go:70] root file system type: tmpfs
	I0923 13:38:04.052880    7084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 13:38:04.052928    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:05.888625    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:05.888625    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:05.888715    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:08.142028    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:08.142988    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:08.148270    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:08.148867    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:08.148867    7084 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.156.56"
	Environment="NO_PROXY=172.19.156.56,172.19.147.0"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 13:38:08.310605    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.156.56
	Environment=NO_PROXY=172.19.156.56,172.19.147.0
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 13:38:08.310605    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:10.129666    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:10.129666    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:10.130128    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:12.317476    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:12.318234    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:12.321900    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:12.322602    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:12.322602    7084 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 13:38:14.553169    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 13:38:14.553169    7084 machine.go:96] duration metric: took 39.8858618s to provisionDockerMachine
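	The update step above uses a diff-or-replace idiom: the new unit file is installed (and docker restarted) only when it differs from, or is newer than, the one on disk — here `diff` fails with "can't stat" because no unit existed yet, so the move runs. A minimal local sketch of that same shell pattern, using temporary paths instead of `/lib/systemd/system` and omitting the `systemctl` calls:

	```shell
	#!/bin/sh
	set -eu
	# Illustrative stand-ins for /lib/systemd/system/docker.service{,.new}
	rm -f /tmp/docker.service /tmp/docker.service.new
	printf '[Service]\nExecStart=/usr/bin/dockerd\n' > /tmp/docker.service.new

	# Same shape as the logged command: if the current file is missing or
	# differs, move the candidate into place (the log also daemon-reloads
	# and restarts docker at this point).
	diff -u /tmp/docker.service /tmp/docker.service.new 2>/dev/null \
	  || mv /tmp/docker.service.new /tmp/docker.service

	grep -q 'ExecStart=/usr/bin/dockerd' /tmp/docker.service && echo updated
	```

	Because `diff` exits non-zero both when the files differ and when the target is absent, one `||` covers first-time installation and subsequent updates alike.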
	I0923 13:38:14.553169    7084 start.go:293] postStartSetup for "multinode-560300-m03" (driver="hyperv")
	I0923 13:38:14.553169    7084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 13:38:14.562368    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 13:38:14.562368    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:16.376794    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:16.377199    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:16.377199    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:18.594118    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:18.594118    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:18.594118    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:38:18.703705    7084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.1410577s)
	I0923 13:38:18.711993    7084 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 13:38:18.721226    7084 command_runner.go:130] > NAME=Buildroot
	I0923 13:38:18.721226    7084 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 13:38:18.721226    7084 command_runner.go:130] > ID=buildroot
	I0923 13:38:18.721226    7084 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 13:38:18.721226    7084 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 13:38:18.721226    7084 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 13:38:18.721226    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0923 13:38:18.721226    7084 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0923 13:38:18.721796    7084 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> 38442.pem in /etc/ssl/certs
	I0923 13:38:18.721796    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /etc/ssl/certs/38442.pem
	I0923 13:38:18.732265    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 13:38:18.747039    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /etc/ssl/certs/38442.pem (1708 bytes)
	I0923 13:38:18.791824    7084 start.go:296] duration metric: took 4.238368s for postStartSetup
	I0923 13:38:18.791882    7084 fix.go:56] duration metric: took 1m17.4870071s for fixHost
	I0923 13:38:18.791971    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:20.600512    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:20.600512    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:20.600814    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:22.815507    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:22.816075    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:22.819176    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:22.819756    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:22.819756    7084 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 13:38:22.952098    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727098703.161859191
	
	I0923 13:38:22.952165    7084 fix.go:216] guest clock: 1727098703.161859191
	I0923 13:38:22.952165    7084 fix.go:229] Guest: 2024-09-23 13:38:23.161859191 +0000 UTC Remote: 2024-09-23 13:38:18.7918821 +0000 UTC m=+357.767357601 (delta=4.369977091s)
	I0923 13:38:22.952239    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:24.747348    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:24.747348    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:24.747348    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:26.913173    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:26.913173    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:26.917172    7084 main.go:141] libmachine: Using SSH client type: native
	I0923 13:38:26.918172    7084 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10f1bc0] 0x10f4700 <nil>  [] 0s} 172.19.145.249 22 <nil> <nil>}
	I0923 13:38:26.918172    7084 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1727098702
	I0923 13:38:27.077868    7084 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Sep 23 13:38:22 UTC 2024
	
	I0923 13:38:27.077868    7084 fix.go:236] clock set: Mon Sep 23 13:38:22 UTC 2024
	 (err=<nil>)
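	The clock-fix sequence above reads the guest clock (`date +%s.%N`), compares it to the host-side timestamp, and resets the guest with `sudo date -s @<epoch>` once the skew (4.37s here) is noticeable. A tiny sketch of the delta computation with illustrative epoch values taken from this log (the `date -s` line is only echoed, not executed):

	```shell
	#!/bin/sh
	set -eu
	# Hypothetical values approximating the log: guest clock vs. host clock
	guest=1727098703
	host=1727098698
	delta=$((guest - host))
	echo "delta=${delta}s"
	# The log resets the guest when skew is observed:
	[ "$delta" -gt 2 ] && echo "would run: sudo date -s @${host}"
	```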
	I0923 13:38:27.077868    7084 start.go:83] releasing machines lock for "multinode-560300-m03", held for 1m25.7725906s
	I0923 13:38:27.078174    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:28.912657    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:28.913641    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:28.913704    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:31.111619    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:31.111619    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:31.113793    7084 out.go:177] * Found network options:
	I0923 13:38:31.117147    7084 out.go:177]   - NO_PROXY=172.19.156.56,172.19.147.0
	W0923 13:38:31.118828    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:38:31.118828    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:38:31.122256    7084 out.go:177]   - NO_PROXY=172.19.156.56,172.19.147.0
	W0923 13:38:31.124702    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:38:31.125669    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:38:31.126561    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	W0923 13:38:31.126561    7084 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 13:38:31.128139    7084 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 13:38:31.128139    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:31.134783    7084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 13:38:31.134783    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:33.030475    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:33.030475    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:33.030475    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:33.034424    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:33.034424    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:33.034424    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:35.311215    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:35.311215    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:35.312090    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:38:35.333848    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:35.334671    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:35.334714    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:38:35.414476    7084 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0923 13:38:35.414505    7084 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.2794336s)
	W0923 13:38:35.414505    7084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 13:38:35.423340    7084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 13:38:35.427558    7084 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0923 13:38:35.427975    7084 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.2991279s)
	W0923 13:38:35.428006    7084 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 13:38:35.456172    7084 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 13:38:35.456233    7084 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 13:38:35.456233    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:38:35.456410    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:38:35.485994    7084 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
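	The command above writes `/etc/crictl.yaml` by piping a `printf` through `sudo tee`, pointing crictl at the containerd socket. The same write, sketched against a local path instead of `/etc` (no sudo needed):

	```shell
	#!/bin/sh
	set -eu
	# Local stand-in for /etc/crictl.yaml
	printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
	  > /tmp/crictl.yaml
	cat /tmp/crictl.yaml
	```

	Note that later in this same log the file is rewritten to `unix:///var/run/cri-dockerd.sock` once Docker (via cri-dockerd) is selected as the container runtime.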
	I0923 13:38:35.496227    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 13:38:35.525772    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0923 13:38:35.541938    7084 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0923 13:38:35.541993    7084 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 13:38:35.545085    7084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 13:38:35.553979    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 13:38:35.584163    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:38:35.610591    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 13:38:35.636377    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 13:38:35.665034    7084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 13:38:35.692446    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 13:38:35.717576    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 13:38:35.746623    7084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
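	The run of `sed` commands above edits `/etc/containerd/config.toml` in place: pinning the sandbox (pause) image, forcing `SystemdCgroup = false` to match the "cgroupfs" driver decision, and normalizing runtime names to `io.containerd.runc.v2`. The two key substitutions, replayed against a minimal local copy of such a config (the TOML content is an illustrative fragment, not the VM's real file):

	```shell
	#!/bin/sh
	set -eu
	# Minimal stand-in for /etc/containerd/config.toml
	cat > /tmp/config.toml <<'EOF'
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = true
	EOF

	# Same substitutions the log runs (GNU sed, -r extended regex, -i in place)
	sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /tmp/config.toml
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml

	grep -E 'sandbox_image|SystemdCgroup' /tmp/config.toml
	```

	The `\1` backreference preserves the original indentation, so the edits are safe regardless of how deeply the keys are nested in the TOML tree.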
	I0923 13:38:35.772585    7084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 13:38:35.791047    7084 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:38:35.791202    7084 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 13:38:35.799442    7084 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 13:38:35.828323    7084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 13:38:35.851345    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:36.050417    7084 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 13:38:36.082211    7084 start.go:495] detecting cgroup driver to use...
	I0923 13:38:36.093434    7084 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 13:38:36.113188    7084 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 13:38:36.113226    7084 command_runner.go:130] > [Unit]
	I0923 13:38:36.113226    7084 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 13:38:36.113226    7084 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 13:38:36.113264    7084 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 13:38:36.113264    7084 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 13:38:36.113264    7084 command_runner.go:130] > StartLimitBurst=3
	I0923 13:38:36.113264    7084 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 13:38:36.113264    7084 command_runner.go:130] > [Service]
	I0923 13:38:36.113264    7084 command_runner.go:130] > Type=notify
	I0923 13:38:36.113264    7084 command_runner.go:130] > Restart=on-failure
	I0923 13:38:36.113317    7084 command_runner.go:130] > Environment=NO_PROXY=172.19.156.56
	I0923 13:38:36.113317    7084 command_runner.go:130] > Environment=NO_PROXY=172.19.156.56,172.19.147.0
	I0923 13:38:36.113317    7084 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 13:38:36.113317    7084 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 13:38:36.113375    7084 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 13:38:36.113375    7084 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 13:38:36.113375    7084 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 13:38:36.113375    7084 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 13:38:36.113438    7084 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 13:38:36.113438    7084 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 13:38:36.113438    7084 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 13:38:36.113438    7084 command_runner.go:130] > ExecStart=
	I0923 13:38:36.113438    7084 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0923 13:38:36.113438    7084 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 13:38:36.113531    7084 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 13:38:36.113531    7084 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 13:38:36.113531    7084 command_runner.go:130] > LimitNOFILE=infinity
	I0923 13:38:36.113531    7084 command_runner.go:130] > LimitNPROC=infinity
	I0923 13:38:36.113531    7084 command_runner.go:130] > LimitCORE=infinity
	I0923 13:38:36.113531    7084 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 13:38:36.113599    7084 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 13:38:36.113599    7084 command_runner.go:130] > TasksMax=infinity
	I0923 13:38:36.113599    7084 command_runner.go:130] > TimeoutStartSec=0
	I0923 13:38:36.113599    7084 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 13:38:36.113645    7084 command_runner.go:130] > Delegate=yes
	I0923 13:38:36.113645    7084 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 13:38:36.113645    7084 command_runner.go:130] > KillMode=process
	I0923 13:38:36.113645    7084 command_runner.go:130] > [Install]
	I0923 13:38:36.113645    7084 command_runner.go:130] > WantedBy=multi-user.target
	I0923 13:38:36.121945    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:38:36.151090    7084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 13:38:36.188835    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 13:38:36.222288    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:38:36.253757    7084 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 13:38:36.311033    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 13:38:36.331924    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 13:38:36.362635    7084 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 13:38:36.372575    7084 ssh_runner.go:195] Run: which cri-dockerd
	I0923 13:38:36.378049    7084 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 13:38:36.386730    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 13:38:36.403592    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 13:38:36.442733    7084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 13:38:36.624828    7084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 13:38:36.797658    7084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 13:38:36.797658    7084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 13:38:36.841670    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:37.029671    7084 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 13:38:39.631170    7084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6012424s)
	I0923 13:38:39.640913    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 13:38:39.668955    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:38:39.698916    7084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 13:38:39.873959    7084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 13:38:40.051348    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:40.248692    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 13:38:40.282026    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 13:38:40.312081    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:40.494059    7084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 13:38:40.590945    7084 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 13:38:40.602990    7084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 13:38:40.615108    7084 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 13:38:40.615108    7084 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 13:38:40.615108    7084 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0923 13:38:40.615108    7084 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 13:38:40.615108    7084 command_runner.go:130] > Access: 2024-09-23 13:38:40.736151606 +0000
	I0923 13:38:40.615108    7084 command_runner.go:130] > Modify: 2024-09-23 13:38:40.736151606 +0000
	I0923 13:38:40.615108    7084 command_runner.go:130] > Change: 2024-09-23 13:38:40.740151448 +0000
	I0923 13:38:40.615108    7084 command_runner.go:130] >  Birth: -
	I0923 13:38:40.615108    7084 start.go:563] Will wait 60s for crictl version
	I0923 13:38:40.624166    7084 ssh_runner.go:195] Run: which crictl
	I0923 13:38:40.628868    7084 command_runner.go:130] > /usr/bin/crictl
	I0923 13:38:40.639595    7084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 13:38:40.684274    7084 command_runner.go:130] > Version:  0.1.0
	I0923 13:38:40.684274    7084 command_runner.go:130] > RuntimeName:  docker
	I0923 13:38:40.684274    7084 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 13:38:40.684274    7084 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 13:38:40.684274    7084 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 13:38:40.693556    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:38:40.720059    7084 command_runner.go:130] > 27.3.0
	I0923 13:38:40.730664    7084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 13:38:40.761424    7084 command_runner.go:130] > 27.3.0
	I0923 13:38:40.767197    7084 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 13:38:40.770417    7084 out.go:177]   - env NO_PROXY=172.19.156.56
	I0923 13:38:40.773415    7084 out.go:177]   - env NO_PROXY=172.19.156.56,172.19.147.0
	I0923 13:38:40.774998    7084 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0923 13:38:40.778975    7084 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0923 13:38:40.778975    7084 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0923 13:38:40.778975    7084 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0923 13:38:40.778975    7084 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:20:d9:6d Flags:up|broadcast|multicast|running}
	I0923 13:38:40.781695    7084 ip.go:214] interface addr: fe80::ec32:5a84:d1ad:defb/64
	I0923 13:38:40.781695    7084 ip.go:214] interface addr: 172.19.144.1/20
	I0923 13:38:40.791028    7084 ssh_runner.go:195] Run: grep 172.19.144.1	host.minikube.internal$ /etc/hosts
	I0923 13:38:40.798144    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.144.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:38:40.820222    7084 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:38:40.820964    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:38:40.821476    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:38:42.635928    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:42.635928    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:42.636342    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:38:42.636947    7084 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-560300 for IP: 172.19.145.249
	I0923 13:38:42.637011    7084 certs.go:194] generating shared ca certs ...
	I0923 13:38:42.637011    7084 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 13:38:42.637554    7084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0923 13:38:42.637752    7084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0923 13:38:42.637752    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 13:38:42.637752    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 13:38:42.638311    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 13:38:42.638404    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 13:38:42.638865    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem (1338 bytes)
	W0923 13:38:42.639097    7084 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844_empty.pem, impossibly tiny 0 bytes
	I0923 13:38:42.639174    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 13:38:42.639407    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0923 13:38:42.639595    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 13:38:42.639781    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 13:38:42.639781    7084 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem (1708 bytes)
	I0923 13:38:42.640310    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem -> /usr/share/ca-certificates/3844.pem
	I0923 13:38:42.640406    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem -> /usr/share/ca-certificates/38442.pem
	I0923 13:38:42.640557    7084 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:42.640725    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 13:38:42.687509    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 13:38:42.729244    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 13:38:42.772828    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 13:38:42.815440    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\3844.pem --> /usr/share/ca-certificates/3844.pem (1338 bytes)
	I0923 13:38:42.861558    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\38442.pem --> /usr/share/ca-certificates/38442.pem (1708 bytes)
	I0923 13:38:42.903530    7084 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 13:38:42.961912    7084 ssh_runner.go:195] Run: openssl version
	I0923 13:38:42.971947    7084 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 13:38:42.981406    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38442.pem && ln -fs /usr/share/ca-certificates/38442.pem /etc/ssl/certs/38442.pem"
	I0923 13:38:43.009975    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38442.pem
	I0923 13:38:43.018070    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:38:43.018152    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 11:34 /usr/share/ca-certificates/38442.pem
	I0923 13:38:43.027949    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38442.pem
	I0923 13:38:43.038880    7084 command_runner.go:130] > 3ec20f2e
	I0923 13:38:43.046778    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/38442.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 13:38:43.076351    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 13:38:43.111997    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:43.118533    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:43.118533    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:11 /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:43.129738    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 13:38:43.139279    7084 command_runner.go:130] > b5213941
	I0923 13:38:43.147492    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 13:38:43.174275    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3844.pem && ln -fs /usr/share/ca-certificates/3844.pem /etc/ssl/certs/3844.pem"
	I0923 13:38:43.203636    7084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3844.pem
	I0923 13:38:43.211564    7084 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:38:43.211564    7084 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 11:34 /usr/share/ca-certificates/3844.pem
	I0923 13:38:43.220668    7084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3844.pem
	I0923 13:38:43.229152    7084 command_runner.go:130] > 51391683
	I0923 13:38:43.237267    7084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3844.pem /etc/ssl/certs/51391683.0"
	I0923 13:38:43.266182    7084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 13:38:43.272295    7084 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:38:43.272295    7084 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 13:38:43.272652    7084 kubeadm.go:934] updating node {m03 172.19.145.249 0 v1.31.1  false true} ...
	I0923 13:38:43.272652    7084 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-560300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.145.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 13:38:43.281424    7084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 13:38:43.299642    7084 command_runner.go:130] > kubeadm
	I0923 13:38:43.299642    7084 command_runner.go:130] > kubectl
	I0923 13:38:43.299642    7084 command_runner.go:130] > kubelet
	I0923 13:38:43.299716    7084 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 13:38:43.307016    7084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0923 13:38:43.322541    7084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0923 13:38:43.351644    7084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 13:38:43.393421    7084 ssh_runner.go:195] Run: grep 172.19.156.56	control-plane.minikube.internal$ /etc/hosts
	I0923 13:38:43.399601    7084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.156.56	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 13:38:43.430444    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:38:43.610013    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:38:43.636859    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:38:43.637688    7084 start.go:317] joinCluster: &{Name:multinode-560300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-560300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.156.56 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.147.0 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 13:38:43.637688    7084 start.go:330] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0923 13:38:43.637688    7084 host.go:66] Checking if "multinode-560300-m03" exists ...
	I0923 13:38:43.638620    7084 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:38:43.638880    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:38:43.639592    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:38:45.523193    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:45.523193    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:45.523193    7084 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:38:45.523805    7084 api_server.go:166] Checking apiserver status ...
	I0923 13:38:45.532625    7084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:38:45.533144    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:38:47.405620    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:47.405620    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:47.405761    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:49.598332    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:38:49.598332    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:49.599189    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:38:49.702991    7084 command_runner.go:130] > 1960
	I0923 13:38:49.703097    7084 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.1701901s)
	I0923 13:38:49.716052    7084 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup
	W0923 13:38:49.733761    7084 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1960/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 13:38:49.746078    7084 ssh_runner.go:195] Run: ls
	I0923 13:38:49.753090    7084 api_server.go:253] Checking apiserver healthz at https://172.19.156.56:8443/healthz ...
	I0923 13:38:49.760095    7084 api_server.go:279] https://172.19.156.56:8443/healthz returned 200:
	ok
	I0923 13:38:49.768024    7084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-560300-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0923 13:38:49.912268    7084 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-z9mrc, kube-system/kube-proxy-dbkdp
	I0923 13:38:49.914509    7084 command_runner.go:130] > node/multinode-560300-m03 cordoned
	I0923 13:38:49.915099    7084 command_runner.go:130] > node/multinode-560300-m03 drained
	I0923 13:38:49.915099    7084 node.go:128] successfully drained node "multinode-560300-m03"
	I0923 13:38:49.915207    7084 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0923 13:38:49.915207    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:38:51.777127    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:51.777127    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:51.777475    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:54.024513    7084 main.go:141] libmachine: [stdout =====>] : 172.19.145.249
	
	I0923 13:38:54.024513    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:54.024513    7084 sshutil.go:53] new ssh client: &{IP:172.19.145.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m03\id_rsa Username:docker}
	I0923 13:38:54.421750    7084 command_runner.go:130] ! W0923 13:38:54.642210    1583 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0923 13:38:54.640777    7084 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Stopping the kubelet service
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0923 13:38:54.640881    7084 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0923 13:38:54.640996    7084 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0923 13:38:54.640996    7084 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0923 13:38:54.640996    7084 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0923 13:38:54.640996    7084 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0923 13:38:54.640996    7084 command_runner.go:130] > to reset your system's IPVS tables.
	I0923 13:38:54.641062    7084 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0923 13:38:54.641062    7084 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0923 13:38:54.641062    7084 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.7255353s)
	I0923 13:38:54.641062    7084 node.go:155] successfully reset node "multinode-560300-m03"
	I0923 13:38:54.642174    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:38:54.642754    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:38:54.643857    7084 request.go:1351] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0923 13:38:54.644200    7084 round_trippers.go:463] DELETE https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:38:54.644284    7084 round_trippers.go:469] Request Headers:
	I0923 13:38:54.644284    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:38:54.644284    7084 round_trippers.go:473]     Content-Type: application/json
	I0923 13:38:54.644284    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:38:54.662355    7084 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0923 13:38:54.662355    7084 round_trippers.go:577] Response Headers:
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Audit-Id: c457f45c-7ca2-41ac-a655-3b46b43d0d24
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:38:54.662355    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:38:54.662355    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Content-Length: 171
	I0923 13:38:54.662355    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:38:54 GMT
	I0923 13:38:54.662355    7084 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-560300-m03","kind":"nodes","uid":"781efd95-4e81-4850-a300-9cef56c5e6d4"}}
	I0923 13:38:54.662355    7084 node.go:180] successfully deleted node "multinode-560300-m03"
	I0923 13:38:54.662355    7084 start.go:334] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0923 13:38:54.662355    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0923 13:38:54.662355    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:38:56.497598    7084 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:38:56.498352    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:56.498467    7084 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:38:58.687812    7084 main.go:141] libmachine: [stdout =====>] : 172.19.156.56
	
	I0923 13:38:58.687812    7084 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:38:58.688784    7084 sshutil.go:53] new ssh client: &{IP:172.19.156.56 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:38:59.052349    7084 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token hf8qg0.xpq656vak932fgac --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 
	I0923 13:38:59.052562    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3899105s)
	I0923 13:38:59.052562    7084 start.go:343] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0923 13:38:59.052562    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hf8qg0.xpq656vak932fgac --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m03"
	I0923 13:38:59.114513    7084 command_runner.go:130] > [preflight] Running pre-flight checks
	I0923 13:38:59.268397    7084 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0923 13:38:59.268470    7084 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0923 13:38:59.330880    7084 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 13:38:59.330880    7084 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 13:38:59.330880    7084 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 13:38:59.523901    7084 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 13:39:00.028729    7084 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 504.579739ms
	I0923 13:39:00.028729    7084 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0923 13:39:00.558000    7084 command_runner.go:130] > This node has joined the cluster:
	I0923 13:39:00.558035    7084 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0923 13:39:00.558035    7084 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0923 13:39:00.558035    7084 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0923 13:39:00.561267    7084 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 13:39:00.561752    7084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token hf8qg0.xpq656vak932fgac --discovery-token-ca-cert-hash sha256:82721203d8ac1a614dd507214dad5b55398d844ac450753554c89ef3ee4caf97 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-560300-m03": (1.5090883s)
	I0923 13:39:00.561752    7084 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0923 13:39:00.750913    7084 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0923 13:39:00.935767    7084 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-560300-m03 minikube.k8s.io/updated_at=2024_09_23T13_39_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=multinode-560300 minikube.k8s.io/primary=false
	I0923 13:39:01.065038    7084 command_runner.go:130] > node/multinode-560300-m03 labeled
	I0923 13:39:01.065224    7084 start.go:319] duration metric: took 17.4262999s to joinCluster
	I0923 13:39:01.065429    7084 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.19.145.249 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0923 13:39:01.065886    7084 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:39:01.070555    7084 out.go:177] * Verifying Kubernetes components...
	I0923 13:39:01.081792    7084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 13:39:01.274835    7084 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 13:39:01.300275    7084 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 13:39:01.300901    7084 kapi.go:59] client config for multinode-560300: &rest.Config{Host:"https://172.19.156.56:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-560300\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27cbc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 13:39:01.300901    7084 node_ready.go:35] waiting up to 6m0s for node "multinode-560300-m03" to be "Ready" ...
	I0923 13:39:01.300901    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:01.300901    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:01.300901    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:01.300901    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:01.305300    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:01.305382    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:01.305382    7084 round_trippers.go:580]     Audit-Id: 6c7b89ba-77f8-4db5-bca3-cf1cac9ceb4b
	I0923 13:39:01.305382    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:01.305382    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:01.305382    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:01.305382    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:01.305382    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:01 GMT
	I0923 13:39:01.305799    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2163","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}}, [truncated 3398 chars]
	I0923 13:39:01.801788    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:01.801788    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:01.801788    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:01.801788    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:01.806113    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:01.806113    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:01.806193    7084 round_trippers.go:580]     Audit-Id: b9da55c1-165f-4a4b-bc85-12ee303b27e3
	I0923 13:39:01.806193    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:01.806193    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:01.806265    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:01.806281    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:01.806361    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:02 GMT
	I0923 13:39:01.807222    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:02.301372    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:02.301372    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:02.301372    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:02.301372    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:02.305895    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:02.305994    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:02.305994    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:02.305994    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:02.305994    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:02 GMT
	I0923 13:39:02.305994    7084 round_trippers.go:580]     Audit-Id: 5e7b1a1a-7a85-4acc-af83-77921694dfa9
	I0923 13:39:02.305994    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:02.305994    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:02.306146    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:02.801779    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:02.801779    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:02.801779    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:02.801779    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:02.805018    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:02.805514    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:02.805514    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:03 GMT
	I0923 13:39:02.805514    7084 round_trippers.go:580]     Audit-Id: 48c50f1a-0700-44e1-8ce5-51387ef7bb82
	I0923 13:39:02.805514    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:02.805514    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:02.805514    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:02.805514    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:02.805885    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:03.301297    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:03.301297    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:03.301297    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:03.301297    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:03.311715    7084 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 13:39:03.311715    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:03.311715    7084 round_trippers.go:580]     Audit-Id: d29ebcc2-f0b0-42a5-b9ff-6f927ea0173c
	I0923 13:39:03.311715    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:03.311715    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:03.311715    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:03.311715    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:03.311715    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:03 GMT
	I0923 13:39:03.311715    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:03.312701    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:03.802425    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:03.802497    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:03.802497    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:03.802572    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:03.806131    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:03.806236    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:03.806236    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:03.806236    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:03.806236    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:03.806236    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:03.806236    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:04 GMT
	I0923 13:39:03.806236    7084 round_trippers.go:580]     Audit-Id: 905ed989-1830-4144-9816-d0753d71301a
	I0923 13:39:03.806502    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:04.302080    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:04.302080    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:04.302080    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:04.302080    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:04.305237    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:04.305237    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:04.305237    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:04.305237    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:04.305237    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:04 GMT
	I0923 13:39:04.305237    7084 round_trippers.go:580]     Audit-Id: 4d469e44-374d-43dd-843c-4d9e179a836d
	I0923 13:39:04.305237    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:04.305237    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:04.306144    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:04.801763    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:04.801763    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:04.801763    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:04.801763    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:04.806515    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:04.806515    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:04.806515    7084 round_trippers.go:580]     Audit-Id: 543e686f-a1b6-4af6-b7cf-a8708492fc2c
	I0923 13:39:04.806515    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:04.806662    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:04.806662    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:04.806662    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:04.806662    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:05 GMT
	I0923 13:39:04.806966    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:05.301915    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:05.301915    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:05.301915    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:05.301915    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:05.305320    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:05.305320    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:05.305320    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:05.305320    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:05.305320    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:05 GMT
	I0923 13:39:05.305320    7084 round_trippers.go:580]     Audit-Id: d971b20a-330b-4556-a0a8-ede400c41717
	I0923 13:39:05.305320    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:05.305320    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:05.305858    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:05.801376    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:05.801376    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:05.801376    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:05.801376    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:05.805306    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:05.805306    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:05.805306    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:05.805306    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:05.805306    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:05.805306    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:05.805306    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:06 GMT
	I0923 13:39:05.805306    7084 round_trippers.go:580]     Audit-Id: cd16fb3d-7813-4246-8f01-ba774eb50efb
	I0923 13:39:05.805659    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:05.806016    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:06.302776    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:06.302776    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:06.302776    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:06.302776    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:06.306847    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:06.306847    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:06.306847    7084 round_trippers.go:580]     Audit-Id: 2e894385-736c-4d81-aacc-f25b1356cb46
	I0923 13:39:06.306946    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:06.306946    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:06.306946    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:06.306946    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:06.306946    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:06 GMT
	I0923 13:39:06.307168    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:06.801991    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:06.801991    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:06.801991    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:06.801991    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:06.806028    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:06.806028    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:06.806170    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:06.806170    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:06.806170    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:06.806170    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:07 GMT
	I0923 13:39:06.806170    7084 round_trippers.go:580]     Audit-Id: f20adb46-f07e-4181-a4bb-af0963ed8b79
	I0923 13:39:06.806170    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:06.806369    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:07.301608    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:07.302023    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:07.302023    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:07.302023    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:07.305095    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:07.305193    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:07.305193    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:07 GMT
	I0923 13:39:07.305193    7084 round_trippers.go:580]     Audit-Id: 8c5ce651-5c0e-4cc4-aa71-cb4c99891551
	I0923 13:39:07.305193    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:07.305193    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:07.305193    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:07.305193    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:07.305323    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:07.801756    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:07.801756    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:07.801756    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:07.801756    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:07.806149    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:07.806149    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:07.806149    7084 round_trippers.go:580]     Audit-Id: 9610f864-f956-4ed7-a6a0-853cd50a1d9d
	I0923 13:39:07.806149    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:07.806149    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:07.806149    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:07.806149    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:07.806149    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:08 GMT
	I0923 13:39:07.806810    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:07.807143    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:08.301774    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:08.301774    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:08.301774    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:08.301774    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:08.307201    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:39:08.307201    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:08.307201    7084 round_trippers.go:580]     Audit-Id: e4e1c153-39f7-4f3b-ba20-786ddcd29c0f
	I0923 13:39:08.307201    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:08.307201    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:08.307201    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:08.307201    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:08.307396    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:08 GMT
	I0923 13:39:08.307547    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:08.802415    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:08.802415    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:08.802415    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:08.802415    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:08.807021    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:08.807136    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:08.807136    7084 round_trippers.go:580]     Audit-Id: 00a89292-003b-46a9-bb25-09b1d65e4635
	I0923 13:39:08.807136    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:08.807136    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:08.807136    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:08.807136    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:08.807136    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:09 GMT
	I0923 13:39:08.807345    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:09.302187    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:09.302402    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:09.302402    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:09.302402    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:09.304985    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:09.304985    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:09.304985    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:09.304985    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:09.304985    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:09.304985    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:09 GMT
	I0923 13:39:09.305854    7084 round_trippers.go:580]     Audit-Id: eaaec5c4-cac4-4af9-9311-0a6c6e7b2925
	I0923 13:39:09.305854    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:09.306089    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:09.801702    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:09.801702    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:09.801702    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:09.801702    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:09.806616    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:09.806679    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:09.806679    7084 round_trippers.go:580]     Audit-Id: cff918b5-8b0d-4484-9490-26bdb9a3c7a3
	I0923 13:39:09.806679    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:09.806679    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:09.806679    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:09.806679    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:09.806679    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:10 GMT
	I0923 13:39:09.806887    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:09.807295    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:10.302434    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:10.302434    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:10.302434    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:10.302434    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:10.306833    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:10.306867    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:10.306867    7084 round_trippers.go:580]     Audit-Id: 8e0c3172-dc67-421b-9302-aaa35642f607
	I0923 13:39:10.306867    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:10.306867    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:10.306867    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:10.306867    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:10.306867    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:10 GMT
	I0923 13:39:10.307010    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2166","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3507 chars]
	I0923 13:39:10.801626    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:10.801626    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:10.801626    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:10.801626    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:10.806200    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:10.806200    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:10.806200    7084 round_trippers.go:580]     Audit-Id: b6789c4c-4ad9-4703-b740-4bd0300a7c3a
	I0923 13:39:10.806200    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:10.806200    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:10.806200    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:10.806200    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:10.806200    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:11 GMT
	I0923 13:39:10.806200    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:11.302500    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:11.302500    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:11.302500    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:11.302500    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:11.305869    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:11.305869    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:11.305869    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:11.305869    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:11.305869    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:11.305869    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:11 GMT
	I0923 13:39:11.305869    7084 round_trippers.go:580]     Audit-Id: 1b6fdfec-0889-44be-a3eb-eb42ddffd487
	I0923 13:39:11.305869    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:11.305869    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:11.802775    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:11.802775    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:11.802775    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:11.802775    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:11.806706    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:11.806706    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:11.806706    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:11.806706    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:11.806706    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:11.806706    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:11.806706    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:12 GMT
	I0923 13:39:11.806706    7084 round_trippers.go:580]     Audit-Id: 44ec42a2-73c4-4533-9417-76ba9fe90e18
	I0923 13:39:11.806706    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:12.302581    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:12.303191    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:12.303191    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:12.303273    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:12.309524    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:39:12.309524    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:12.309524    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:12.309524    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:12.309524    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:12 GMT
	I0923 13:39:12.309524    7084 round_trippers.go:580]     Audit-Id: 5d742447-04b1-4142-b591-c83247d021af
	I0923 13:39:12.309524    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:12.309524    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:12.309524    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:12.310264    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:12.802217    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:12.802217    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:12.802217    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:12.802217    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:12.805420    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:12.805857    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:12.805857    7084 round_trippers.go:580]     Audit-Id: 4d395a35-34ed-4653-96ad-e1c629306345
	I0923 13:39:12.805857    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:12.805857    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:12.805857    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:12.805857    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:12.805857    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:13 GMT
	I0923 13:39:12.806022    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:13.302675    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:13.302675    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:13.302675    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:13.302675    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:13.305492    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:13.305492    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:13.305492    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:13.305492    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:13.305492    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:13.305492    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:13.305492    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:13 GMT
	I0923 13:39:13.305492    7084 round_trippers.go:580]     Audit-Id: dd1d2ec0-f1d1-49f8-881c-a91cf9e0d5dc
	I0923 13:39:13.305640    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:13.802483    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:13.802483    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:13.802483    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:13.802483    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:13.806514    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:13.806514    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:13.806514    7084 round_trippers.go:580]     Audit-Id: a092f448-7eee-47a1-b62a-1bd91f1f9a2f
	I0923 13:39:13.806514    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:13.806514    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:13.806514    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:13.806514    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:13.806514    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:14 GMT
	I0923 13:39:13.806514    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:14.302494    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:14.302494    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:14.302494    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:14.302494    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:14.305585    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:14.305925    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:14.305925    7084 round_trippers.go:580]     Audit-Id: 691de75a-90da-4b1f-98f1-edf74effc4ab
	I0923 13:39:14.305995    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:14.305995    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:14.305995    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:14.305995    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:14.305995    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:14 GMT
	I0923 13:39:14.306296    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:14.802504    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:14.802504    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:14.802504    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:14.802504    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:14.806894    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:14.806894    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:14.806894    7084 round_trippers.go:580]     Audit-Id: 7f1bc79e-8ac1-4b74-8634-59962b043abc
	I0923 13:39:14.806894    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:14.806894    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:14.806894    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:14.806894    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:14.807010    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:15 GMT
	I0923 13:39:14.807157    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:14.807157    7084 node_ready.go:53] node "multinode-560300-m03" has status "Ready":"False"
	I0923 13:39:15.303389    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:15.303515    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.303515    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.303552    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.307839    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:15.307839    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.307839    7084 round_trippers.go:580]     Audit-Id: a92e2b6d-e33e-4f77-8a78-a2363c02d6ba
	I0923 13:39:15.307839    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.307839    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.307839    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.307839    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.307839    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:15 GMT
	I0923 13:39:15.307839    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2188","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3899 chars]
	I0923 13:39:15.802777    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:15.802777    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.802777    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.803118    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.806455    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:15.806538    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.806538    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.806538    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.806538    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.806538    7084 round_trippers.go:580]     Audit-Id: 89772002-8cec-4c23-8e4b-55d8bbefd8e4
	I0923 13:39:15.806538    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.806538    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.806688    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2198","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3765 chars]
	I0923 13:39:15.807090    7084 node_ready.go:49] node "multinode-560300-m03" has status "Ready":"True"
	I0923 13:39:15.807172    7084 node_ready.go:38] duration metric: took 14.5052916s for node "multinode-560300-m03" to be "Ready" ...
	I0923 13:39:15.807172    7084 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 13:39:15.807279    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods
	I0923 13:39:15.807334    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.807334    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.807334    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.812491    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:39:15.812491    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.812491    7084 round_trippers.go:580]     Audit-Id: e53f799d-d2fd-4534-a90e-d77aad9b34b9
	I0923 13:39:15.812491    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.812491    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.812491    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.812491    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.812491    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.813213    7084 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2198"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89061 chars]
	I0923 13:39:15.820328    7084 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.820506    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-glx94
	I0923 13:39:15.820620    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.820620    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.820875    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.824923    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:15.825011    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.825011    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.825011    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.825087    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.825107    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.825107    7084 round_trippers.go:580]     Audit-Id: 1c49bccb-45db-48c2-a04f-e334daf6d282
	I0923 13:39:15.825107    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.825405    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-glx94","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"f476c8f8-667a-48d4-84f8-4aa15336cea9","resourceVersion":"1844","creationTimestamp":"2024-09-23T13:13:00Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3da69dd4-925a-49d7-873e-76f727162fe7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:13:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3da69dd4-925a-49d7-873e-76f727162fe7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7044 chars]
	I0923 13:39:15.826244    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:15.826244    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.826244    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.826357    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.831083    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:15.831083    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.831083    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.831083    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.831083    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.831083    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.831083    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.831083    7084 round_trippers.go:580]     Audit-Id: 728cc23c-93a6-41ec-87c5-4d147551db78
	I0923 13:39:15.831785    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:15.831833    7084 pod_ready.go:93] pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:15.831833    7084 pod_ready.go:82] duration metric: took 11.5037ms for pod "coredns-7c65d6cfc9-glx94" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.831833    7084 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.831833    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-560300
	I0923 13:39:15.831833    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.831833    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.831833    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.834775    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:15.834775    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.834775    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.834775    7084 round_trippers.go:580]     Audit-Id: 6e6a643e-4a1a-4abb-b2ae-17892adf9749
	I0923 13:39:15.834775    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.834775    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.834775    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.834775    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.834775    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-560300","namespace":"kube-system","uid":"477ee4f5-e333-4042-97cd-8457f60fd696","resourceVersion":"1822","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.156.56:2379","kubernetes.io/config.hash":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.mirror":"5355b6b4959bf084aa219664777184d4","kubernetes.io/config.seen":"2024-09-23T13:34:12.988417729Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6606 chars]
	I0923 13:39:15.835776    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:15.835776    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.835776    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.835776    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.841444    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:39:15.841444    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.841444    7084 round_trippers.go:580]     Audit-Id: 3124890c-6864-41ce-8059-de10f04da53b
	I0923 13:39:15.841444    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.841444    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.841444    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.841444    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.841444    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.841444    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:15.842073    7084 pod_ready.go:93] pod "etcd-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:15.842143    7084 pod_ready.go:82] duration metric: took 10.2401ms for pod "etcd-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.842143    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.842208    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-560300
	I0923 13:39:15.842266    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.842266    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.842266    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.844038    7084 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 13:39:15.844038    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.844038    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.844038    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.844038    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.844038    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.844038    7084 round_trippers.go:580]     Audit-Id: 00e9142a-c3ee-458b-932e-8e8130d14f2e
	I0923 13:39:15.844038    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.844038    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-560300","namespace":"kube-system","uid":"c88cb5c4-fe30-4354-bf55-1f281cf11190","resourceVersion":"1816","creationTimestamp":"2024-09-23T13:34:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.156.56:8443","kubernetes.io/config.hash":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.mirror":"f26f0ad234b91b3692c2863cf9f943a6","kubernetes.io/config.seen":"2024-09-23T13:34:12.942044692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:34:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8039 chars]
	I0923 13:39:15.844038    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:15.844038    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.844038    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.844038    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.848577    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:15.848645    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.848645    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.848645    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.848645    7084 round_trippers.go:580]     Audit-Id: d60d0ce8-e27d-4a4a-a0f3-a19929b45063
	I0923 13:39:15.848645    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.848645    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.848645    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.848795    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:15.849236    7084 pod_ready.go:93] pod "kube-apiserver-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:15.849236    7084 pod_ready.go:82] duration metric: took 7.0934ms for pod "kube-apiserver-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.849236    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.849370    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-560300
	I0923 13:39:15.849370    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.849370    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.849370    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.854279    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:15.854398    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.854398    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.854398    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.854398    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.854398    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.854398    7084 round_trippers.go:580]     Audit-Id: 1c709d99-bb05-4d16-9dc9-75a89ad2ce85
	I0923 13:39:15.854398    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.854398    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-560300","namespace":"kube-system","uid":"aa0d358b-19fd-4553-8a34-f772ba945019","resourceVersion":"1809","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.mirror":"068ac3e29c5a22bd6a0a27377f2fa904","kubernetes.io/config.seen":"2024-09-23T13:12:54.655473592Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0923 13:39:15.854990    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:15.854990    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:15.854990    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:15.854990    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:15.857192    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:15.857192    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:15.857192    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:15.857192    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:15.857192    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:15.857192    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:15.857192    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:15.857192    7084 round_trippers.go:580]     Audit-Id: 3518da6e-cfc6-4e75-990b-5c0863cce8ee
	I0923 13:39:15.857192    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:15.857192    7084 pod_ready.go:93] pod "kube-controller-manager-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:15.857192    7084 pod_ready.go:82] duration metric: took 7.8842ms for pod "kube-controller-manager-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:15.857192    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.004737    7084 request.go:632] Waited for 147.535ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:39:16.004737    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dbkdp
	I0923 13:39:16.004737    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.004737    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.004737    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.007308    7084 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 13:39:16.008121    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.008121    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:16.008121    7084 round_trippers.go:580]     Audit-Id: eb17622b-a2c2-4c55-a979-f780feda10c7
	I0923 13:39:16.008121    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.008121    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.008121    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.008121    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.008589    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-dbkdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"44a5a18e-0e93-4293-8d4b-13e3ec9acfef","resourceVersion":"2173","creationTimestamp":"2024-09-23T13:20:08Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:20:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6208 chars]
	I0923 13:39:16.203204    7084 request.go:632] Waited for 193.6269ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:16.203668    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m03
	I0923 13:39:16.203732    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.203803    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.203803    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.210164    7084 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 13:39:16.210164    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.210164    7084 round_trippers.go:580]     Audit-Id: c10d2841-3348-4969-b637-c9bea3d265ba
	I0923 13:39:16.210164    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.210164    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.210164    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.210164    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.210164    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:16.210782    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m03","uid":"09b97ffd-4d14-443f-ac38-b89a5f91ddd1","resourceVersion":"2198","creationTimestamp":"2024-09-23T13:39:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_39_00_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:39:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3765 chars]
	I0923 13:39:16.210782    7084 pod_ready.go:93] pod "kube-proxy-dbkdp" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:16.210782    7084 pod_ready.go:82] duration metric: took 353.5659ms for pod "kube-proxy-dbkdp" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.211308    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.403089    7084 request.go:632] Waited for 191.6361ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:39:16.403370    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g5t97
	I0923 13:39:16.403405    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.403405    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.403405    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.406835    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:16.406914    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.406914    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.406914    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:16.406978    7084 round_trippers.go:580]     Audit-Id: 44d20b8c-f627-4d13-aaa5-db2488b8f6e3
	I0923 13:39:16.407004    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.407004    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.407086    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.407332    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-g5t97","generateName":"kube-proxy-","namespace":"kube-system","uid":"49d7601a-bda4-421e-bde7-acc35c157962","resourceVersion":"1982","creationTimestamp":"2024-09-23T13:15:47Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:15:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6198 chars]
	I0923 13:39:16.603654    7084 request.go:632] Waited for 195.6092ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:39:16.603654    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300-m02
	I0923 13:39:16.603654    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.603654    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.603654    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.608421    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:16.608632    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.608632    7084 round_trippers.go:580]     Audit-Id: a70481b8-8669-4697-bcf2-5c0d6c5dda29
	I0923 13:39:16.608632    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.608632    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.608632    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.608632    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.608726    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:16 GMT
	I0923 13:39:16.608897    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300-m02","uid":"d5474ea1-12de-4dc6-8912-2196daa169f2","resourceVersion":"2019","creationTimestamp":"2024-09-23T13:36:42Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T13_36_42_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:36:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3812 chars]
	I0923 13:39:16.609638    7084 pod_ready.go:93] pod "kube-proxy-g5t97" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:16.609698    7084 pod_ready.go:82] duration metric: took 398.3631ms for pod "kube-proxy-g5t97" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.609698    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:16.803185    7084 request.go:632] Waited for 193.3841ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:39:16.803185    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rgmcw
	I0923 13:39:16.803185    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:16.803185    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:16.803185    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:16.807151    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:16.807151    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:16.807151    7084 round_trippers.go:580]     Audit-Id: 28902986-fb2d-43ed-bcf9-f07a74a30942
	I0923 13:39:16.807151    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:16.807299    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:16.807299    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:16.807299    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:16.807299    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:16.807471    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rgmcw","generateName":"kube-proxy-","namespace":"kube-system","uid":"97050e09-6fc3-4e7b-b00e-07eb9332bf15","resourceVersion":"1800","creationTimestamp":"2024-09-23T13:12:59Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a140510d-1b4d-4719-ba46-a22a5774102e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a140510d-1b4d-4719-ba46-a22a5774102e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6400 chars]
	I0923 13:39:17.003159    7084 request.go:632] Waited for 194.8437ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:17.003159    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:17.003159    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:17.003159    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:17.003159    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:17.007405    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:17.007405    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:17.007469    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:17.007469    7084 round_trippers.go:580]     Audit-Id: e549d9ed-fcb2-4313-b016-547d40f67021
	I0923 13:39:17.007469    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:17.007469    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:17.007469    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:17.007469    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:17.007684    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:17.008110    7084 pod_ready.go:93] pod "kube-proxy-rgmcw" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:17.008110    7084 pod_ready.go:82] duration metric: took 398.3847ms for pod "kube-proxy-rgmcw" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:17.008110    7084 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:17.203844    7084 request.go:632] Waited for 195.7204ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:39:17.203844    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-560300
	I0923 13:39:17.203844    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:17.203844    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:17.203844    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:17.208442    7084 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 13:39:17.208442    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:17.208442    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:17.208442    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:17.208442    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:17.208442    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:17.208442    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:17.208442    7084 round_trippers.go:580]     Audit-Id: e1d718d8-a3f3-4254-9872-19a09c9ff30f
	I0923 13:39:17.208750    7084 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-560300","namespace":"kube-system","uid":"01e5d6a3-2eb6-4fa4-8607-072724fb2880","resourceVersion":"1810","creationTimestamp":"2024-09-23T13:12:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.mirror":"6c1e940c9d426af6457d0b82f4ff98b0","kubernetes.io/config.seen":"2024-09-23T13:12:54.655474492Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T13:12:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0923 13:39:17.403679    7084 request.go:632] Waited for 194.2833ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:17.403679    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes/multinode-560300
	I0923 13:39:17.403679    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:17.403679    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:17.403679    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:17.407575    7084 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 13:39:17.407638    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:17.407638    7084 round_trippers.go:580]     Audit-Id: e20a951f-e7f4-49aa-b191-ab8ffdb57297
	I0923 13:39:17.407638    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:17.407638    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:17.407638    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:17.407703    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:17.407703    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:17.408166    7084 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T13:12:51Z","fieldsType":"FieldsV1","f [truncated 5237 chars]
	I0923 13:39:17.408692    7084 pod_ready.go:93] pod "kube-scheduler-multinode-560300" in "kube-system" namespace has status "Ready":"True"
	I0923 13:39:17.408774    7084 pod_ready.go:82] duration metric: took 400.6367ms for pod "kube-scheduler-multinode-560300" in "kube-system" namespace to be "Ready" ...
	I0923 13:39:17.408774    7084 pod_ready.go:39] duration metric: took 1.6014934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
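The `pod_ready.go` lines above poll each system pod until its `Ready` condition reports `"True"`. As an illustrative sketch (not minikube's actual code), the core check is just reading the `status.conditions` array out of the Pod JSON that the round-trippers log shows:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podReady reports whether a Pod JSON document carries a condition
// with Type "Ready" and Status "True" — the check behind the
// `has status "Ready":"True"` log lines above. Hypothetical sketch,
// not minikube's implementation.
func podReady(podJSON []byte) (bool, error) {
	var pod struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal(podJSON, &pod); err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	// No Ready condition yet: the kubelet has not reported status.
	return false, nil
}

func main() {
	sample := []byte(`{"status":{"conditions":[` +
		`{"type":"PodScheduled","status":"True"},` +
		`{"type":"Ready","status":"True"}]}}`)
	ready, err := podReady(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", ready) // prints "ready: true"
}
```

In the real client the poll repeats with a timeout (the `waiting up to 6m0s` lines) until this predicate holds or the deadline passes.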
	I0923 13:39:17.408857    7084 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 13:39:17.418690    7084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:39:17.442609    7084 system_svc.go:56] duration metric: took 33.7494ms WaitForService to wait for kubelet
	I0923 13:39:17.442609    7084 kubeadm.go:582] duration metric: took 16.3760175s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 13:39:17.442609    7084 node_conditions.go:102] verifying NodePressure condition ...
	I0923 13:39:17.603223    7084 request.go:632] Waited for 160.6034ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.156.56:8443/api/v1/nodes
	I0923 13:39:17.603223    7084 round_trippers.go:463] GET https://172.19.156.56:8443/api/v1/nodes
	I0923 13:39:17.603223    7084 round_trippers.go:469] Request Headers:
	I0923 13:39:17.603223    7084 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 13:39:17.603223    7084 round_trippers.go:473]     Accept: application/json, */*
	I0923 13:39:17.608840    7084 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 13:39:17.608968    7084 round_trippers.go:577] Response Headers:
	I0923 13:39:17.608968    7084 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c6f182b6-26f4-4c6b-97b9-7541c368c547
	I0923 13:39:17.609049    7084 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a549cd88-9375-4627-b882-b9422d8b046a
	I0923 13:39:17.609049    7084 round_trippers.go:580]     Date: Mon, 23 Sep 2024 13:39:17 GMT
	I0923 13:39:17.609049    7084 round_trippers.go:580]     Audit-Id: 4024b7fe-cde6-4346-868f-42a7dcb51386
	I0923 13:39:17.609049    7084 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 13:39:17.609049    7084 round_trippers.go:580]     Content-Type: application/json
	I0923 13:39:17.609049    7084 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2200"},"items":[{"metadata":{"name":"multinode-560300","uid":"ca2e19d0-5c32-477c-b08c-70e73dca3b0c","resourceVersion":"1829","creationTimestamp":"2024-09-23T13:12:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-560300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-560300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T13_12_55_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14732 chars]
	I0923 13:39:17.610334    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:39:17.610334    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:39:17.610334    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:39:17.610334    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:39:17.610334    7084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 13:39:17.610334    7084 node_conditions.go:123] node cpu capacity is 2
	I0923 13:39:17.610334    7084 node_conditions.go:105] duration metric: took 167.7137ms to run NodePressure ...
	I0923 13:39:17.610334    7084 start.go:241] waiting for startup goroutines ...
	I0923 13:39:17.610334    7084 start.go:255] writing updated cluster config ...
	I0923 13:39:17.620931    7084 ssh_runner.go:195] Run: rm -f paused
	I0923 13:39:17.737351    7084 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 13:39:17.742327    7084 out.go:177] * Done! kubectl is now configured to use "multinode-560300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.186993180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.187010284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.187132416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.294876931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.295095989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.295130598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.295240826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 cri-dockerd[1353]: time="2024-09-23T13:34:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c745d868b64c4532f8ad5bdebcbcc9ee100dae012e0ca3795632542a6b06e49/resolv.conf as [nameserver 172.19.144.1]"
	Sep 23 13:34:35 multinode-560300 cri-dockerd[1353]: time="2024-09-23T13:34:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/351b966363b271c4c844f2f95f249bab933c1dd7c4da616e5cbeabc560539187/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.604933988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.605406710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.605437318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.605673079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.751684842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.751743457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.751755561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:35 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:35.751849885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:34:50 multinode-560300 dockerd[1084]: time="2024-09-23T13:34:50.674654594Z" level=info msg="ignoring event" container=865debd751d9213807787fbbbd437ea058c6838f1690b4c94703a34e6bc419bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 13:34:50 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:50.675320938Z" level=info msg="shim disconnected" id=865debd751d9213807787fbbbd437ea058c6838f1690b4c94703a34e6bc419bc namespace=moby
	Sep 23 13:34:50 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:50.675376342Z" level=warning msg="cleaning up after shim disconnected" id=865debd751d9213807787fbbbd437ea058c6838f1690b4c94703a34e6bc419bc namespace=moby
	Sep 23 13:34:50 multinode-560300 dockerd[1090]: time="2024-09-23T13:34:50.675386143Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 13:35:06 multinode-560300 dockerd[1090]: time="2024-09-23T13:35:06.238139739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 13:35:06 multinode-560300 dockerd[1090]: time="2024-09-23T13:35:06.238221145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 13:35:06 multinode-560300 dockerd[1090]: time="2024-09-23T13:35:06.238235446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 13:35:06 multinode-560300 dockerd[1090]: time="2024-09-23T13:35:06.239178617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17566040b9804       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   0ff72b1dec7fd       storage-provisioner
	1875788bf6c4f       8c811b4aec35f                                                                                         5 minutes ago       Running             busybox                   1                   351b966363b27       busybox-7dff88458-wwgwh
	609a4fd1025a6       c69fa2e9cbf5f                                                                                         5 minutes ago       Running             coredns                   1                   9c745d868b64c       coredns-7c65d6cfc9-glx94
	3f8f7c342259d       12968670680f4                                                                                         6 minutes ago       Running             kindnet-cni               1                   df461afcdc9bf       kindnet-mdnmc
	865debd751d92       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       1                   0ff72b1dec7fd       storage-provisioner
	b35e7e3038b34       60c005f310ff3                                                                                         6 minutes ago       Running             kube-proxy                1                   bd858198f9515       kube-proxy-rgmcw
	413a6df004359       6bab7719df100                                                                                         6 minutes ago       Running             kube-apiserver            0                   78a98649ec3e5       kube-apiserver-multinode-560300
	dd2c109781ba7       2e96e5913fc06                                                                                         6 minutes ago       Running             etcd                      0                   081a66a1431bc       etcd-multinode-560300
	95c3c32cc98ce       175ffd71cce3d                                                                                         6 minutes ago       Running             kube-controller-manager   1                   6021c04207bdf       kube-controller-manager-multinode-560300
	b3f4f9c6259d7       9aa1fad941575                                                                                         6 minutes ago       Running             kube-scheduler            1                   ab97e1f22bda9       kube-scheduler-multinode-560300
	78de2657becad       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   f294b19f20ba1       busybox-7dff88458-wwgwh
	648460d0f31f3       c69fa2e9cbf5f                                                                                         27 minutes ago      Exited              coredns                   0                   eb12eb8fe1eab       coredns-7c65d6cfc9-glx94
	a83589d1098af       kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166              27 minutes ago      Exited              kindnet-cni               0                   0f322d00a55b9       kindnet-mdnmc
	c92a84f5caf22       60c005f310ff3                                                                                         27 minutes ago      Exited              kube-proxy                0                   cf2fc1e617749       kube-proxy-rgmcw
	117d706d07d2f       9aa1fad941575                                                                                         27 minutes ago      Exited              kube-scheduler            0                   b160f7a7a5d22       kube-scheduler-multinode-560300
	03ce0954301e2       175ffd71cce3d                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   67b7e79ad6b59       kube-controller-manager-multinode-560300
	
	
	==> coredns [609a4fd1025a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 84be67bfc79374dbf0f7b1050900d3b4b08d81a78db730aed13edbe839abc3cb2446f0d06c08690ac53a97ad9f5103fd82097eeb4b4696d252f023888848e6e0
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40678 - 27872 "HINFO IN 6316078708195576795.4122069032706466927. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.055088755s
	
	
	==> coredns [648460d0f31f] <==
	[INFO] 10.244.0.3:38681 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000058704s
	[INFO] 10.244.0.3:52711 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127209s
	[INFO] 10.244.0.3:54030 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000224916s
	[INFO] 10.244.0.3:55333 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000045404s
	[INFO] 10.244.0.3:49850 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079305s
	[INFO] 10.244.0.3:54603 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043103s
	[INFO] 10.244.0.3:56551 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014271s
	[INFO] 10.244.1.2:45863 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113008s
	[INFO] 10.244.1.2:36717 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085106s
	[INFO] 10.244.1.2:43150 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082206s
	[INFO] 10.244.1.2:34236 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197714s
	[INFO] 10.244.0.3:37601 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112108s
	[INFO] 10.244.0.3:60698 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000178513s
	[INFO] 10.244.0.3:35977 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068705s
	[INFO] 10.244.0.3:54979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114608s
	[INFO] 10.244.1.2:58051 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107208s
	[INFO] 10.244.1.2:36408 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000226517s
	[INFO] 10.244.1.2:33973 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210915s
	[INFO] 10.244.1.2:45767 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000104007s
	[INFO] 10.244.0.3:36090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125109s
	[INFO] 10.244.0.3:46993 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240317s
	[INFO] 10.244.0.3:40120 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087606s
	[INFO] 10.244.0.3:46564 - 5 "PTR IN 1.144.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080205s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-560300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-560300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-560300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T13_12_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-560300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:40:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:39:35 +0000   Mon, 23 Sep 2024 13:12:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:39:35 +0000   Mon, 23 Sep 2024 13:12:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:39:35 +0000   Mon, 23 Sep 2024 13:12:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:39:35 +0000   Mon, 23 Sep 2024 13:34:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.156.56
	  Hostname:    multinode-560300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 62b2b5f3fb144947abe480d0f65b087c
	  System UUID:                d1328c2e-dfd4-f844-981c-cc7a85ce582e
	  Boot ID:                    6117261d-ee87-4a2f-8732-d0e777a92cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wwgwh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7c65d6cfc9-glx94                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-560300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-mdnmc                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-560300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-multinode-560300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-rgmcw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-560300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 27m                    kube-proxy       
	  Normal  Starting                 6m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-560300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-560300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-560300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-560300 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-560300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-560300 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                    node-controller  Node multinode-560300 event: Registered Node multinode-560300 in Controller
	  Normal  NodeReady                27m                    kubelet          Node multinode-560300 status is now: NodeReady
	  Normal  Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node multinode-560300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s (x8 over 6m14s)  kubelet          Node multinode-560300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s (x7 over 6m14s)  kubelet          Node multinode-560300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m6s                   node-controller  Node multinode-560300 event: Registered Node multinode-560300 in Controller
	
	
	Name:               multinode-560300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-560300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-560300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T13_36_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 13:36:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-560300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 13:40:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 13:36:59 +0000   Mon, 23 Sep 2024 13:36:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 13:36:59 +0000   Mon, 23 Sep 2024 13:36:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 13:36:59 +0000   Mon, 23 Sep 2024 13:36:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 13:36:59 +0000   Mon, 23 Sep 2024 13:36:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.147.0
	  Hostname:    multinode-560300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a21f0feca74349b2b9042ee744adfa2a
	  System UUID:                05b2789d-962f-ff45-a09c-66a2273cfcfc
	  Boot ID:                    911ea883-1447-4a97-be79-edd6379e1e0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9m52c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kindnet-qg99z              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-g5t97           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  Starting                 24m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)      kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)      kubelet          Node multinode-560300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)      kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                24m                    kubelet          Node multinode-560300-m02 status is now: NodeReady
	  Normal  Starting                 3m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m45s (x2 over 3m45s)  kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s (x2 over 3m45s)  kubelet          Node multinode-560300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x2 over 3m45s)  kubelet          Node multinode-560300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m41s                  node-controller  Node multinode-560300-m02 event: Registered Node multinode-560300-m02 in Controller
	  Normal  NodeReady                3m28s                  kubelet          Node multinode-560300-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.040305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.730835] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.961525] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.266991] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep23 13:33] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.151411] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[Sep23 13:34] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +0.101723] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.480456] systemd-fstab-generator[1050]: Ignoring "noauto" option for root device
	[  +0.173357] systemd-fstab-generator[1062]: Ignoring "noauto" option for root device
	[  +0.195668] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +2.913233] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.201094] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.188180] systemd-fstab-generator[1330]: Ignoring "noauto" option for root device
	[  +0.277734] systemd-fstab-generator[1345]: Ignoring "noauto" option for root device
	[  +0.814367] systemd-fstab-generator[1474]: Ignoring "noauto" option for root device
	[  +0.103991] kauditd_printk_skb: 202 callbacks suppressed
	[  +4.133969] systemd-fstab-generator[1616]: Ignoring "noauto" option for root device
	[  +1.240735] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.793688] kauditd_printk_skb: 30 callbacks suppressed
	[  +3.237606] systemd-fstab-generator[2446]: Ignoring "noauto" option for root device
	[ +12.290864] kauditd_printk_skb: 72 callbacks suppressed
	[ +15.436521] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [dd2c109781ba] <==
	{"level":"info","ts":"2024-09-23T13:34:14.955649Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f91e3c1fba4ebf31","local-member-id":"4a5242b58f83d2a4","added-peer-id":"4a5242b58f83d2a4","added-peer-peer-urls":["https://172.19.153.215:2380"]}
	{"level":"info","ts":"2024-09-23T13:34:14.956044Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f91e3c1fba4ebf31","local-member-id":"4a5242b58f83d2a4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:34:14.956294Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T13:34:14.953679Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:34:14.958573Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T13:34:14.959934Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4a5242b58f83d2a4","initial-advertise-peer-urls":["https://172.19.156.56:2380"],"listen-peer-urls":["https://172.19.156.56:2380"],"advertise-client-urls":["https://172.19.156.56:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.156.56:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T13:34:14.960114Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T13:34:14.960302Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.19.156.56:2380"}
	{"level":"info","ts":"2024-09-23T13:34:14.960424Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.19.156.56:2380"}
	{"level":"info","ts":"2024-09-23T13:34:16.610857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T13:34:16.611022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T13:34:16.611068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 received MsgPreVoteResp from 4a5242b58f83d2a4 at term 2"}
	{"level":"info","ts":"2024-09-23T13:34:16.611089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T13:34:16.611154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 received MsgVoteResp from 4a5242b58f83d2a4 at term 3"}
	{"level":"info","ts":"2024-09-23T13:34:16.611193Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4a5242b58f83d2a4 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T13:34:16.611223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4a5242b58f83d2a4 elected leader 4a5242b58f83d2a4 at term 3"}
	{"level":"info","ts":"2024-09-23T13:34:16.616033Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4a5242b58f83d2a4","local-member-attributes":"{Name:multinode-560300 ClientURLs:[https://172.19.156.56:2379]}","request-path":"/0/members/4a5242b58f83d2a4/attributes","cluster-id":"f91e3c1fba4ebf31","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T13:34:16.616040Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:34:16.616577Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T13:34:16.618615Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T13:34:16.618781Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T13:34:16.620595Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:34:16.620792Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T13:34:16.622201Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.156.56:2379"}
	{"level":"info","ts":"2024-09-23T13:34:16.622584Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:40:27 up 7 min,  0 users,  load average: 0.58, 0.38, 0.18
	Linux multinode-560300 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3f8f7c342259] <==
	I0923 13:39:41.903758       1 main.go:299] handling current node
	I0923 13:39:41.903819       1 main.go:295] Handling node with IPs: map[172.19.147.0:{}]
	I0923 13:39:41.903847       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:39:41.904030       1 main.go:295] Handling node with IPs: map[172.19.145.249:{}]
	I0923 13:39:41.904107       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.2.0/24] 
	I0923 13:39:51.903516       1 main.go:295] Handling node with IPs: map[172.19.156.56:{}]
	I0923 13:39:51.903718       1 main.go:299] handling current node
	I0923 13:39:51.903751       1 main.go:295] Handling node with IPs: map[172.19.147.0:{}]
	I0923 13:39:51.903771       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:39:51.903940       1 main.go:295] Handling node with IPs: map[172.19.145.249:{}]
	I0923 13:39:51.904119       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.2.0/24] 
	I0923 13:40:01.903438       1 main.go:295] Handling node with IPs: map[172.19.147.0:{}]
	I0923 13:40:01.903901       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:40:01.904297       1 main.go:295] Handling node with IPs: map[172.19.145.249:{}]
	I0923 13:40:01.904631       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.2.0/24] 
	I0923 13:40:01.904824       1 main.go:295] Handling node with IPs: map[172.19.156.56:{}]
	I0923 13:40:01.904870       1 main.go:299] handling current node
	I0923 13:40:11.903278       1 main.go:295] Handling node with IPs: map[172.19.156.56:{}]
	I0923 13:40:11.903321       1 main.go:299] handling current node
	I0923 13:40:11.903338       1 main.go:295] Handling node with IPs: map[172.19.147.0:{}]
	I0923 13:40:11.903345       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:40:21.903699       1 main.go:295] Handling node with IPs: map[172.19.156.56:{}]
	I0923 13:40:21.903735       1 main.go:299] handling current node
	I0923 13:40:21.903751       1 main.go:295] Handling node with IPs: map[172.19.147.0:{}]
	I0923 13:40:21.903758       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [a83589d1098a] <==
	I0923 13:31:18.964652       1 main.go:299] handling current node
	I0923 13:31:28.967066       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:31:28.967263       1 main.go:299] handling current node
	I0923 13:31:28.967409       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:31:28.967426       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:31:28.967698       1 main.go:295] Handling node with IPs: map[172.19.154.147:{}]
	I0923 13:31:28.967797       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.3.0/24] 
	I0923 13:31:38.965072       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:31:38.965222       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:31:38.965665       1 main.go:295] Handling node with IPs: map[172.19.154.147:{}]
	I0923 13:31:38.965727       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.3.0/24] 
	I0923 13:31:38.966087       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:31:38.966355       1 main.go:299] handling current node
	I0923 13:31:48.963706       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:31:48.963819       1 main.go:299] handling current node
	I0923 13:31:48.963839       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:31:48.963847       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:31:48.964013       1 main.go:295] Handling node with IPs: map[172.19.154.147:{}]
	I0923 13:31:48.964036       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.3.0/24] 
	I0923 13:31:59.165838       1 main.go:295] Handling node with IPs: map[172.19.153.215:{}]
	I0923 13:31:59.165899       1 main.go:299] handling current node
	I0923 13:31:59.165917       1 main.go:295] Handling node with IPs: map[172.19.147.68:{}]
	I0923 13:31:59.165923       1 main.go:322] Node multinode-560300-m02 has CIDR [10.244.1.0/24] 
	I0923 13:31:59.166052       1 main.go:295] Handling node with IPs: map[172.19.154.147:{}]
	I0923 13:31:59.166058       1 main.go:322] Node multinode-560300-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [413a6df00435] <==
	I0923 13:34:18.055474       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 13:34:18.055655       1 policy_source.go:224] refreshing policies
	I0923 13:34:18.073669       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 13:34:18.090604       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 13:34:18.090954       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 13:34:18.091816       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 13:34:18.094579       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 13:34:18.094610       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 13:34:18.098149       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 13:34:18.098259       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 13:34:18.099602       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 13:34:18.100374       1 aggregator.go:171] initial CRD sync complete...
	I0923 13:34:18.100626       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 13:34:18.100748       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 13:34:18.100806       1 cache.go:39] Caches are synced for autoregister controller
	I0923 13:34:18.105250       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0923 13:34:18.895354       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0923 13:34:19.554970       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.156.56]
	I0923 13:34:19.557761       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 13:34:19.575166       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 13:34:20.800845       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 13:34:20.998770       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 13:34:21.019786       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 13:34:21.192544       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 13:34:21.203172       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [03ce0954301e] <==
	I0923 13:29:50.397253       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:29:50.417873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:29:55.019867       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:29:55.020720       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:30:00.948213       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-560300-m03\" does not exist"
	I0923 13:30:00.948785       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:30:00.978057       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-560300-m03" podCIDRs=["10.244.3.0/24"]
	I0923 13:30:00.978437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:00.978740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:01.221075       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:01.744630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:04.343091       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:11.080865       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:16.211262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:16.211320       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:30:16.230161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:30:19.317006       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:31:44.475013       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:31:44.475847       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:31:44.690768       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:31:49.852885       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:31:59.793582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:31:59.825146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m02"
	I0923 13:31:59.880051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.219202ms"
	I0923 13:31:59.881783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.304µs"
	
	
	==> kube-controller-manager [95c3c32cc98c] <==
	I0923 13:37:06.374183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="39.905µs"
	I0923 13:37:08.347244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.575469ms"
	I0923 13:37:08.347568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="81.009µs"
	I0923 13:38:50.113500       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:38:50.137165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:38:54.896422       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:38:54.898384       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:39:00.425093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:39:00.425191       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-560300-m03\" does not exist"
	I0923 13:39:00.458489       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-560300-m03" podCIDRs=["10.244.2.0/24"]
	I0923 13:39:00.458691       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:00.459149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:00.789071       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:01.293235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:01.635776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:10.563381       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:15.856611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m03"
	I0923 13:39:15.857076       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:15.876115       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:16.625053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:39:35.192518       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300"
	I0923 13:40:01.397803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:40:01.419717       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	I0923 13:40:06.436121       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-560300-m02"
	I0923 13:40:06.436603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-560300-m03"
	
	
	==> kube-proxy [b35e7e3038b3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:34:21.011624       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:34:21.079597       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.156.56"]
	E0923 13:34:21.081635       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:34:21.328765       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:34:21.328818       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:34:21.328844       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:34:21.334895       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:34:21.336491       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:34:21.336556       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:34:21.339773       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:34:21.340668       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:34:21.340787       1 config.go:199] "Starting service config controller"
	I0923 13:34:21.340844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:34:21.341908       1 config.go:328] "Starting node config controller"
	I0923 13:34:21.341987       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:34:21.441988       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:34:21.442051       1 shared_informer.go:320] Caches are synced for node config
	I0923 13:34:21.442074       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c92a84f5caf2] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 13:13:01.510581       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 13:13:01.528211       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.19.153.215"]
	E0923 13:13:01.528393       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 13:13:01.595991       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 13:13:01.596175       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 13:13:01.596207       1 server_linux.go:169] "Using iptables Proxier"
	I0923 13:13:01.601897       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 13:13:01.602395       1 server.go:483] "Version info" version="v1.31.1"
	I0923 13:13:01.602427       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:13:01.610743       1 config.go:199] "Starting service config controller"
	I0923 13:13:01.610798       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 13:13:01.610828       1 config.go:105] "Starting endpoint slice config controller"
	I0923 13:13:01.610834       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 13:13:01.612235       1 config.go:328] "Starting node config controller"
	I0923 13:13:01.612451       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 13:13:01.710868       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 13:13:01.711136       1 shared_informer.go:320] Caches are synced for service config
	I0923 13:13:01.712783       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [117d706d07d2] <==
	E0923 13:12:52.395522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.490447       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 13:12:52.490806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.548160       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 13:12:52.548442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.602117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 13:12:52.602162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.677098       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 13:12:52.677310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.689862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 13:12:52.690136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.707741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 13:12:52.707845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.743202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 13:12:52.743233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.840286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 13:12:52.840633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.860952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 13:12:52.861450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.904935       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 13:12:52.905322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 13:12:52.968156       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 13:12:52.968278       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 13:12:55.111169       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0923 13:32:00.406868       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b3f4f9c6259d] <==
	I0923 13:34:16.021221       1 serving.go:386] Generated self-signed cert in-memory
	W0923 13:34:17.953141       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 13:34:17.953472       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 13:34:17.954760       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 13:34:17.954963       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 13:34:18.091227       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 13:34:18.091282       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 13:34:18.097212       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 13:34:18.100133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 13:34:18.100174       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 13:34:18.100217       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 13:34:18.201238       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 13:36:13 multinode-560300 kubelet[1623]: E0923 13:36:13.086653    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:36:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:36:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:36:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:36:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:37:13 multinode-560300 kubelet[1623]: E0923 13:37:13.087445    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:37:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:37:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:37:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:37:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:38:13 multinode-560300 kubelet[1623]: E0923 13:38:13.087104    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:38:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:38:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:38:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:38:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:39:13 multinode-560300 kubelet[1623]: E0923 13:39:13.085941    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:39:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:39:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:39:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:39:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 13:40:13 multinode-560300 kubelet[1623]: E0923 13:40:13.086155    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 13:40:13 multinode-560300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 13:40:13 multinode-560300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 13:40:13 multinode-560300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 13:40:13 multinode-560300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-560300 -n multinode-560300
E0923 13:40:30.077172    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-560300 -n multinode-560300: (10.3786045s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-560300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (51.50s)

TestKubernetesUpgrade (10800.357s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-491700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-491700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m20.80047s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-491700
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-491700: (31.7988176s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-491700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-491700 status --format={{.Host}}: exit status 7 (2.169408s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-491700 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperv
panic: test timed out after 3h0m0s
	running tests:
		TestForceSystemdFlag (2m18s)
		TestKubernetesUpgrade (7m10s)
		TestRunningBinaryUpgrade (12m46s)
		TestStartStop (12m46s)
		TestStoppedBinaryUpgrade (6m36s)
		TestStoppedBinaryUpgrade/Upgrade (6m35s)

goroutine 2687 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 6 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc000640ea0, 0xc00096fbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x104
testing.runTests(0xc00050e0f0, {0x4c6ffe0, 0x2a, 0x2a}, {0xffffffffffffffff?, 0xb5d3d9?, 0x4c93060?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000593720)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000593720)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:129 +0xa8

goroutine 13 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00016c180)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 169 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 168
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 1145 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000640820)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000640820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc000640820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0xf8
testing.tRunner(0xc000640820, 0x356d900)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 1407 [chan receive, 134 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000574ec0, 0xc0000563f0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1383
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 1143 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0000f04e0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0000f04e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc0000f04e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x87
testing.tRunner(0xc0000f04e0, 0x356d8f0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 168 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3889720, 0xc0000563f0}, 0xc00094bf50, 0xc00094bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3889720, 0xc0000563f0}, 0xf0?, 0xc00094bf50, 0xc00094bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3889720?, 0xc0000563f0?}, 0x2900380?, 0xc000057110?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc2ee25?, 0xc0001f2300?, 0xc000056af0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 190
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 167 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00069c510, 0x3b)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc00228fd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x38a29c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00069c540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000120560, {0x38656c0, 0xc000806150}, 0x1, 0xc0000563f0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000120560, 0x3b9aca00, 0x0, 0x1, 0xc0000563f0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 190
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 189 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3880000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 190 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00069c540, 0xc0000563f0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 188
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

goroutine 2702 [syscall]:
syscall.SyscallN(0xc001605a98?, {0xc001605af0?, 0xc001605b20?, 0xab3c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0xb0049d?, 0x16be10c0108?, 0xc0008c3035?, 0xc000681f80?, 0x10?, 0x10?, 0x10001605bc8?, 0x16be6a319a8?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x4a8, {0xc001415aad?, 0x553, 0xb595bf?}, 0xaa131e?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc0015586c8?, {0xc001415aad?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc0015586c8, {0xc001415aad, 0x553, 0x553})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000668048, {0xc001415aad?, 0xaab916?, 0x20c?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00137c090, {0x3864100, 0xc0004c0058})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3864280, 0xc00137c090}, {0x3864100, 0xc0004c0058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001605e78?, {0x3864280, 0xc00137c090})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001605f38?, {0x3864280?, 0xc00137c090?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3864280, 0xc00137c090}, {0x38641e0, 0xc000668048}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0016c8230?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2590
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

goroutine 2683 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc00139c480, 0xc001366ee0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1146
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

goroutine 2699 [syscall]:
syscall.SyscallN(0xc?, {0xc000955af0?, 0xc000955b20?, 0xab3c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0xb00705?, 0x0?, 0x0?, 0x0?, 0xe4a33a?, 0x2?, 0x10100000010?, 0x16be6c13038?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x494, {0xc00144a303?, 0x4fd, 0xb595bf?}, 0x6?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc00135efc8?, {0xc00144a303?, 0x562?, 0x562?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc00135efc8, {0xc00144a303, 0x4fd, 0x4fd})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004c0328, {0xc00144a303?, 0x60?, 0x23e?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00137c780, {0x3864100, 0xc0000a6530})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3864280, 0xc00137c780}, {0x3864100, 0xc0000a6530}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0001f96d8?, {0x3864280, 0xc00137c780})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001366c40?, {0x3864280?, 0xc00137c780?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3864280, 0xc00137c780}, {0x38641e0, 0xc0004c0328}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0001f9680?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2698
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

goroutine 2576 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00036c820)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00036c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00036c820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00036c820, 0xc000574500)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2571
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2590 [syscall, 4 minutes]:
syscall.SyscallN(0xc00149ba0e?, {0xc00149b9d0?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall(0x10?, 0xc00149ba38?, 0x1000000aa5ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:458 +0x2f
syscall.WaitForSingleObject(0x5e0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc0001f9680?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0001f9680)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0001f9680)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0000f1860, 0xc0001f9680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc0000f1860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x735
testing.tRunner(0xc0000f1860, 0x356d9f8)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2579 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0001a7040)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0001a7040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0001a7040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc0001a7040, 0x356d9d0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 1254 [IO wait, 154 minutes]:
internal/poll.runtime_pollWait(0x16be69f9b90, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xb58015?, 0xb0049d?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0015d6020, 0xc000951b88)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).acceptOne(0xc0015d6008, 0x2c8, {0xc000bec0f0?, 0xc000951be8?, 0xb62a25?}, 0xc000951c1c?)
	/usr/local/go/src/internal/poll/fd_windows.go:946 +0x65
internal/poll.(*FD).Accept(0xc0015d6008, 0xc000951d68)
	/usr/local/go/src/internal/poll/fd_windows.go:980 +0x1b6
net.(*netFD).accept(0xc0015d6008)
	/usr/local/go/src/net/fd_windows.go:182 +0x4b
net.(*TCPListener).accept(0xc000a0c300)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000a0c300)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc0002192c0, {0x387d050, 0xc000a0c300})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc0002192c0)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0001a7380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1251
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

goroutine 2704 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001f9680, 0xc001366460)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2590
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

goroutine 2698 [syscall, 6 minutes]:
syscall.SyscallN(0xc001497786?, {0xc001497748?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall(0x10?, 0xc0014977b0?, 0x1000000aa5ac5?, 0x4d?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:458 +0x2f
syscall.WaitForSingleObject(0x68c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc001384a80?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001384a80)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001384a80)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0006411e0, 0xc001384a80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:183 +0x36d
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc001497c50?, {0x38714b8, 0xc00059d600}, 0x356ebf8, {0x0, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x11c
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x38714b8?, 0xc00059d600?}, 0x40?, {0x0?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x56
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/var/lib/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc00185de28, 0x3b9aca00, 0x1a3185c5000, {0xc00185dd38?, 0x26e8ee0?, 0x39abea3?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/pkg/util/retry/retry.go:60 +0xef
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc0006411e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:188 +0x2b1
testing.tRunner(0xc0006411e0, 0xc000574a00)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2591
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2744 [syscall]:
syscall.SyscallN(0x80?, {0xc001a3baf0?, 0xc001a3bb20?, 0xab3c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0xb0049d?, 0x16be10c0108?, 0xc001a3bb41?, 0xc00055ac80?, 0xc001a3bc20?, 0xbec33e?, 0x10000a982c6?, 0x16be6c13038?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x734, {0xc00144ba0e?, 0x5f2, 0x0?}, 0x7d?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc0003d7688?, {0xc00144ba0e?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc0003d7688, {0xc00144ba0e, 0x5f2, 0x5f2})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000526030, {0xc00144ba0e?, 0xb0fe20?, 0x20e?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00137c150, {0x3864100, 0xc000a88058})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3864280, 0xc00137c150}, {0x3864100, 0xc000a88058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3864280, 0xc00137c150})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xa9ff36?, {0x3864280?, 0xc00137c150?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3864280, 0xc00137c150}, {0x38641e0, 0xc000526030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0003441c0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2592
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

goroutine 2591 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0014c2680, {0x2bf4026?, 0x3005753e800?}, 0xc000574a00)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc0014c2680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:160 +0x2ab
testing.tRunner(0xc0014c2680, 0x356da20)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2746 [select]:
os/exec.(*Cmd).watchCtx(0xc0001f9980, 0xc0013667e0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2592
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

goroutine 2703 [syscall]:
syscall.SyscallN(0xc001951ae0?, {0xc001951af0?, 0x0?, 0xab3c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0xb00705?, 0x16be10c0598?, 0x8000?, 0x1?, 0xc001951bd0?, 0xc001951bd0?, 0x10100a982c6?, 0x16be6a0bdc0?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x4e4, {0xc0013b9c7e?, 0x382, 0xb595bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001558d88?, {0xc0013b9c7e?, 0x1fe6?, 0x1fe6?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001558d88, {0xc0013b9c7e, 0x382, 0x382})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000668070, {0xc0013b9c7e?, 0xaab916?, 0x3e3f?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00137c0c0, {0x3864100, 0xc000526018})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3864280, 0xc00137c0c0}, {0x3864100, 0xc000526018}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001951e78?, {0x3864280, 0xc00137c0c0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001951f38?, {0x3864280?, 0xc00137c0c0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3864280, 0xc00137c0c0}, {0x38641e0, 0xc000668070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0016c81c0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2590
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

goroutine 1429 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1428
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 1406 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3880000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1383
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1427 [sync.Cond.Wait, 6 minutes]:
sync.runtime_notifyListWait(0xc000574e90, 0x30)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc00135bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x38a29c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000574ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013f4690, {0x38656c0, 0xc000c44690}, 0x1, 0xc0000563f0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013f4690, 0x3b9aca00, 0x0, 0x1, 0xc0000563f0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1407
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

goroutine 1146 [syscall, 3 minutes]:
syscall.SyscallN(0xc001f39b2e?, {0xc001f39af0?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall(0x10?, 0xc001f39b58?, 0x1000000aa5ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:458 +0x2f
syscall.WaitForSingleObject(0x720, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc00139c480?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00139c480)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc00139c480)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014c2000, 0xc00139c480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc0014c2000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:91 +0x347
testing.tRunner(0xc0014c2000, 0x356d930)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 1147 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0014c21a0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014c21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc0014c21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x87
testing.tRunner(0xc0014c21a0, 0x356d928)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2588 [chan receive, 13 minutes]:
testing.(*T).Run(0xc0001a7ba0, {0x2bf0539?, 0xbeeef3?}, 0x356dc08)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop(0xc0001a7ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0001a7ba0, 0x356da18)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2681 [syscall, 3 minutes]:
syscall.SyscallN(0xc?, {0xc001a07af0?, 0xc001a07b20?, 0xab3c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0xb0049d?, 0x16be10c0598?, 0xc0017d4035?, 0xc0016d7f80?, 0x10?, 0x10?, 0x10001a07bc8?, 0x16be6c13038?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x64c, {0xc00144aa10?, 0x5f0, 0x0?}, 0xc001a07c04?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001559d48?, {0xc00144aa10?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001559d48, {0xc00144aa10, 0x5f0, 0x5f0})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004c0460, {0xc00144aa10?, 0xc001a07d50?, 0x210?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0013f89f0, {0x3864100, 0xc000a881f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3864280, 0xc0013f89f0}, {0x3864100, 0xc000a881f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001a07e78?, {0x3864280, 0xc0013f89f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001a07f38?, {0x3864280?, 0xc0013f89f0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3864280, 0xc0013f89f0}, {0x38641e0, 0xc0004c0460}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0007f4a10?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1146
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

goroutine 2572 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00036c1a0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00036c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00036c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00036c1a0, 0xc000574280)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2571
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 1428 [select, 6 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3889720, 0xc0000563f0}, 0xc00094df50, 0xc00094df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3889720, 0xc0000563f0}, 0x90?, 0xc00094df50, 0xc00094df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3889720?, 0xc0000563f0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00094dfd0?, 0xc2ee84?, 0x20656c694674754f?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1407
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

goroutine 1431 [chan send, 134 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001f9c80, 0xc001366070)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1430
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

goroutine 1144 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0000f0680)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0000f0680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc0000f0680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0000f0680, 0x356d8e8)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 1568 [chan send, 131 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001f2300, 0xc0016b8ee0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 1342
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

goroutine 2592 [syscall]:
syscall.SyscallN(0xc000be7906?, {0xc000be78c8?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall(0x10?, 0xc000be7930?, 0x1000000aa5ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:458 +0x2f
syscall.WaitForSingleObject(0x440, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc0001f9980?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0001f9980)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0001f9980)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014c2820, 0xc0001f9980)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0014c2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:243 +0xaf5
testing.tRunner(0xc0014c2820, 0x356d998)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2745 [syscall]:
syscall.SyscallN(0x0?, {0xc0017a3af0?, 0xc0017a3b20?, 0xab3c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0xb00705?, 0x16be10c0a28?, 0x4f495441434f4c77?, 0xa30393639313d4e?, 0xc00059c920?, 0xc00059ca80?, 0x101000a8060?, 0x16be68747d8?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x630, {0xc0013b00d7?, 0x1f29, 0xb595bf?}, 0xc0000a6080?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc0003d7b08?, {0xc0013b00d7?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc0003d7b08, {0xc0013b00d7, 0x1f29, 0x1f29})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000526048, {0xc0013b00d7?, 0xc001426480?, 0x1e1c?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00137c180, {0x3864100, 0xc0004c0268})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3864280, 0xc00137c180}, {0x3864100, 0xc0004c0268}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x2bf1840?, {0x3864280, 0xc00137c180})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc?, {0x3864280?, 0xc00137c180?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3864280, 0xc00137c180}, {0x38641e0, 0xc000526048}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0x356d9e8?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2592
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

goroutine 2701 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc001384a80, 0xc0007f4d90)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2698
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

goroutine 2682 [syscall, 3 minutes]:
syscall.SyscallN(0x16be6632f18?, {0xc001907af0?, 0xc001907b20?, 0xab3c85?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0xb00705?, 0x0?, 0x0?, 0xc000000000?, 0x10?, 0x10?, 0x10101907bc8?, 0x16be6a0d300?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x510, {0xc001383bc6?, 0x43a, 0xb595bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc0003d7448?, {0xc001383bc6?, 0x2000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc0003d7448, {0xc001383bc6, 0x43a, 0x43a})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004c04a0, {0xc001383bc6?, 0xc001907d50?, 0x1000?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0013f8a20, {0x3864100, 0xc000669040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3864280, 0xc0013f8a20}, {0x3864100, 0xc000669040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001907e78?, {0x3864280, 0xc0013f8a20})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001907f38?, {0x3864280?, 0xc0013f8a20?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3864280, 0xc0013f8a20}, {0x38641e0, 0xc0004c04a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0007f4540?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1146
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

goroutine 2577 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00036c9c0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00036c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00036c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00036c9c0, 0xc000574580)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2571
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2700 [syscall, 6 minutes]:
syscall.SyscallN(0xc001315b70?, {0xc001315af0?, 0x11?, 0x11?})
	/usr/local/go/src/runtime/syscall_windows.go:519 +0x46
syscall.Syscall6(0xb0049d?, 0x0?, 0xc001315ba0?, 0xb0814b?, 0x7?, 0x0?, 0x100000f6100?, 0x16be66e2d08?)
	/usr/local/go/src/runtime/syscall_windows.go:465 +0x5c
syscall.readFile(0x464, {0xc000451800?, 0x200, 0x0?}, 0x8?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc00135f448?, {0xc000451800?, 0x200?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc00135f448, {0xc000451800, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004c03a0, {0xc000451800?, 0x60?, 0x0?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00137c7b0, {0x3864100, 0xc000668fc0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3864280, 0xc00137c7b0}, {0x3864100, 0xc000668fc0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0013844d8?, {0x3864280, 0xc00137c7b0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0007f4850?, {0x3864280?, 0xc00137c7b0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3864280, 0xc00137c7b0}, {0x38641e0, 0xc0004c03a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001384480?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2698
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

goroutine 2573 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00036c340)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00036c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00036c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00036c340, 0xc0005742c0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2571
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2574 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00036c4e0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00036c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00036c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00036c4e0, 0xc000574300)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2571
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2575 [chan receive, 13 minutes]:
testing.(*testContext).waitParallel(0xc0007385f0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00036c680)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00036c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00036c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00036c680, 0xc000574340)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2571
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2571 [chan receive, 13 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc00036c000, 0x356dc08)
	/usr/local/go/src/testing/testing.go:1696 +0x104
created by testing.(*T).Run in goroutine 2588
	/usr/local/go/src/testing/testing.go:1743 +0x377

TestNoKubernetes/serial/StartWithK8s (310.81s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-679300 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-679300 --driver=hyperv: exit status 1 (4m59.7159157s)

-- stdout --
	* [NoKubernetes-679300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-679300" primary control-plane node in "NoKubernetes-679300" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-679300 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-679300 -n NoKubernetes-679300
E0923 14:00:30.157728    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-679300 -n NoKubernetes-679300: exit status 6 (11.0905444s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0923 14:00:33.828505    9484 status.go:448] forwarded endpoint: failed to lookup ip for ""

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-679300" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (310.81s)


Test pass (136/200)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 15.1
4 TestDownloadOnly/v1.20.0/preload-exists 0.05
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.24
9 TestDownloadOnly/v1.20.0/DeleteAll 0.53
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.76
12 TestDownloadOnly/v1.31.1/json-events 9.68
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.21
18 TestDownloadOnly/v1.31.1/DeleteAll 0.85
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.53
21 TestBinaryMirror 6.14
22 TestOffline 370.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.23
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
27 TestAddons/Setup 405.29
29 TestAddons/serial/Volcano 62.76
31 TestAddons/serial/GCPAuth/Namespaces 0.29
34 TestAddons/parallel/Ingress 60.49
35 TestAddons/parallel/InspektorGadget 24.83
36 TestAddons/parallel/MetricsServer 19.6
38 TestAddons/parallel/CSI 78.17
39 TestAddons/parallel/Headlamp 47.63
40 TestAddons/parallel/CloudSpanner 19.23
41 TestAddons/parallel/LocalPath 28.91
42 TestAddons/parallel/NvidiaDevicePlugin 20.49
43 TestAddons/parallel/Yakd 24.77
44 TestAddons/StoppedEnableDisable 49.24
56 TestErrorSpam/start 14.99
57 TestErrorSpam/status 31.62
58 TestErrorSpam/pause 19.6
59 TestErrorSpam/unpause 20.3
60 TestErrorSpam/stop 56.58
63 TestFunctional/serial/CopySyncFile 0.03
64 TestFunctional/serial/StartWithProxy 211.69
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 122.97
67 TestFunctional/serial/KubeContext 0.11
68 TestFunctional/serial/KubectlGetPods 0.22
71 TestFunctional/serial/CacheCmd/cache/add_remote 23.63
72 TestFunctional/serial/CacheCmd/cache/add_local 9.54
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.21
74 TestFunctional/serial/CacheCmd/cache/list 0.22
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 32.08
77 TestFunctional/serial/CacheCmd/cache/delete 0.42
78 TestFunctional/serial/MinikubeKubectlCmd 0.44
82 TestFunctional/serial/LogsCmd 109.64
83 TestFunctional/serial/LogsFileCmd 120.52
86 TestFunctional/parallel/ConfigCmd 1.99
88 TestFunctional/parallel/DryRun 9.89
89 TestFunctional/parallel/InternationalLanguage 4.87
95 TestFunctional/parallel/AddonsCmd 0.55
98 TestFunctional/parallel/SSHCmd 18.58
99 TestFunctional/parallel/CpCmd 49.05
101 TestFunctional/parallel/FileSync 8.18
102 TestFunctional/parallel/CertSync 49.76
108 TestFunctional/parallel/NonActiveRuntimeDisabled 7.95
110 TestFunctional/parallel/License 2
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 10.71
128 TestFunctional/parallel/ProfileCmd/profile_list 10.91
129 TestFunctional/parallel/ProfileCmd/profile_json_output 10.8
130 TestFunctional/parallel/Version/short 0.2
131 TestFunctional/parallel/Version/components 6.67
133 TestFunctional/parallel/UpdateContextCmd/no_changes 2.09
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.12
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.11
141 TestFunctional/parallel/ImageCommands/Setup 2.06
146 TestFunctional/parallel/ImageCommands/ImageRemove 120.51
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 60.1
149 TestFunctional/delete_echo-server_images 0.17
150 TestFunctional/delete_my-image_image 0.07
151 TestFunctional/delete_minikube_cached_images 0.07
155 TestMultiControlPlane/serial/StartCluster 649.85
156 TestMultiControlPlane/serial/DeployApp 57.97
158 TestMultiControlPlane/serial/AddWorkerNode 238.66
159 TestMultiControlPlane/serial/NodeLabels 0.16
160 TestMultiControlPlane/serial/HAppyAfterClusterStart 43.23
161 TestMultiControlPlane/serial/CopyFile 555.38
162 TestMultiControlPlane/serial/StopSecondaryNode 67.35
163 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 33.19
167 TestImageBuild/serial/Setup 175.49
168 TestImageBuild/serial/NormalBuild 9.12
169 TestImageBuild/serial/BuildWithBuildArg 7.82
170 TestImageBuild/serial/BuildWithDockerIgnore 7.23
171 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.33
175 TestJSONOutput/start/Command 207.75
176 TestJSONOutput/start/Audit 0
178 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Command 6.89
182 TestJSONOutput/pause/Audit 0
184 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Command 6.82
188 TestJSONOutput/unpause/Audit 0
190 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/stop/Command 36.52
194 TestJSONOutput/stop/Audit 0
196 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
198 TestErrorJSONOutput 0.78
203 TestMainNoArgs 0.22
204 TestMinikubeProfile 477.37
207 TestMountStart/serial/StartWithMountFirst 138.49
208 TestMountStart/serial/VerifyMountFirst 8.56
209 TestMountStart/serial/StartWithMountSecond 138.07
210 TestMountStart/serial/VerifyMountSecond 8.25
211 TestMountStart/serial/DeleteFirst 24.66
212 TestMountStart/serial/VerifyMountPostDelete 8.28
213 TestMountStart/serial/Stop 27.47
214 TestMountStart/serial/RestartStopped 104.48
215 TestMountStart/serial/VerifyMountPostStop 8.31
218 TestMultiNode/serial/FreshStart2Nodes 391.15
219 TestMultiNode/serial/DeployApp2Nodes 9.02
221 TestMultiNode/serial/AddNode 210.38
222 TestMultiNode/serial/MultiNodeLabels 0.14
223 TestMultiNode/serial/ProfileList 30.78
224 TestMultiNode/serial/CopyFile 310.39
225 TestMultiNode/serial/StopNode 66.96
226 TestMultiNode/serial/StartAfterStop 169.73
232 TestPreload 444.89
233 TestScheduledStopWindows 300.48
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.32
252 TestPause/serial/Start 178.4
254 TestPause/serial/SecondStartNoReconfiguration 349.68
257 TestPause/serial/Pause 7.32
258 TestPause/serial/VerifyStatus 10.89
259 TestPause/serial/Unpause 6.89
260 TestPause/serial/PauseAgain 7.16
261 TestPause/serial/DeletePaused 43.47
262 TestPause/serial/VerifyDeletedResources 24.49
TestDownloadOnly/v1.20.0/json-events (15.1s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-668400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-668400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (15.0970573s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.10s)

TestDownloadOnly/v1.20.0/preload-exists (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 11:08:23.615085    3844 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 11:08:23.665475    3844 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.05s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-668400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-668400: exit status 85 (235.6842ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-668400 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC |          |
	|         | -p download-only-668400        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:08:08
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:08:08.599853    8632 out.go:345] Setting OutFile to fd 692 ...
	I0923 11:08:08.649854    8632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:08:08.649854    8632 out.go:358] Setting ErrFile to fd 724...
	I0923 11:08:08.649854    8632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 11:08:08.660855    8632 root.go:314] Error reading config file at C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0923 11:08:08.669857    8632 out.go:352] Setting JSON to true
	I0923 11:08:08.671859    8632 start.go:129] hostinfo: {"hostname":"minikube5","uptime":485665,"bootTime":1726604023,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:08:08.671859    8632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:08:08.677864    8632 out.go:97] [download-only-668400] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	W0923 11:08:08.678438    8632 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0923 11:08:08.678438    8632 notify.go:220] Checking for updates...
	I0923 11:08:08.680434    8632 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:08:08.682925    8632 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:08:08.685574    8632 out.go:169] MINIKUBE_LOCATION=19690
	I0923 11:08:08.688083    8632 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0923 11:08:08.693027    8632 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 11:08:08.693618    8632 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:08:13.452839    8632 out.go:97] Using the hyperv driver based on user configuration
	I0923 11:08:13.453373    8632 start.go:297] selected driver: hyperv
	I0923 11:08:13.453373    8632 start.go:901] validating driver "hyperv" against <nil>
	I0923 11:08:13.453444    8632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:08:13.496575    8632 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0923 11:08:13.497602    8632 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 11:08:13.498590    8632 cni.go:84] Creating CNI manager for ""
	I0923 11:08:13.498684    8632 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 11:08:13.498875    8632 start.go:340] cluster config:
	{Name:download-only-668400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-668400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:08:13.499602    8632 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:08:13.504140    8632 out.go:97] Downloading VM boot image ...
	I0923 11:08:13.504363    8632 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 11:08:17.449499    8632 out.go:97] Starting "download-only-668400" primary control-plane node in "download-only-668400" cluster
	I0923 11:08:17.449499    8632 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 11:08:17.493173    8632 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 11:08:17.493312    8632 cache.go:56] Caching tarball of preloaded images
	I0923 11:08:17.493817    8632 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 11:08:17.499783    8632 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 11:08:17.499783    8632 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0923 11:08:17.568774    8632 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-668400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-668400"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
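Each download line in the log above appends `?checksum=file:...sha256` or `?checksum=md5:...` to the URL; minikube's downloader verifies the fetched artifact against that digest. A minimal sketch of the same verification idea, assuming a POSIX shell with `sha256sum` available (the helper name and sample file are illustrative, not minikube code):

```shell
# Hypothetical stand-in for the post-download checksum check implied by
# the "?checksum=..." suffix on the URLs above.
verify_sha256() {
    # verify_sha256 FILE EXPECTED_HEX -> exit status 0 on match, 1 otherwise
    actual=$(sha256sum "$1" | awk '{print $1}')
    [ "$actual" = "$2" ]
}

tmp=$(mktemp)
printf 'hello' > "$tmp"
# Well-known SHA-256 digest of the string "hello"
verify_sha256 "$tmp" \
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824" \
    && echo "checksum OK"
rm -f "$tmp"
```

A mismatch makes `verify_sha256` return non-zero, which is when minikube would discard the artifact and fail the download.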
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.24s)

TestDownloadOnly/v1.20.0/DeleteAll (0.53s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.53s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.76s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-668400
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.76s)

TestDownloadOnly/v1.31.1/json-events (9.68s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-291700 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-291700 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperv: (9.6845173s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (9.68s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 11:08:34.882526    3844 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 11:08:34.882526    3844 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-291700
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-291700: exit status 85 (210.2822ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-668400 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC |                     |
	|         | -p download-only-668400        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC | 23 Sep 24 11:08 UTC |
	| delete  | -p download-only-668400        | download-only-668400 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC | 23 Sep 24 11:08 UTC |
	| start   | -o=json --download-only        | download-only-291700 | minikube5\jenkins | v1.34.0 | 23 Sep 24 11:08 UTC |                     |
	|         | -p download-only-291700        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:08:25
	Running on machine: minikube5
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:08:25.286325    4732 out.go:345] Setting OutFile to fd 784 ...
	I0923 11:08:25.331331    4732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:08:25.331331    4732 out.go:358] Setting ErrFile to fd 824...
	I0923 11:08:25.331331    4732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:08:25.351233    4732 out.go:352] Setting JSON to true
	I0923 11:08:25.354331    4732 start.go:129] hostinfo: {"hostname":"minikube5","uptime":485681,"bootTime":1726604023,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:08:25.354331    4732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:08:25.359712    4732 out.go:97] [download-only-291700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:08:25.359712    4732 notify.go:220] Checking for updates...
	I0923 11:08:25.361540    4732 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:08:25.363946    4732 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:08:25.366474    4732 out.go:169] MINIKUBE_LOCATION=19690
	I0923 11:08:25.368913    4732 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0923 11:08:25.373472    4732 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 11:08:25.374058    4732 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:08:30.102533    4732 out.go:97] Using the hyperv driver based on user configuration
	I0923 11:08:30.102533    4732 start.go:297] selected driver: hyperv
	I0923 11:08:30.103399    4732 start.go:901] validating driver "hyperv" against <nil>
	I0923 11:08:30.103946    4732 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:08:30.145679    4732 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0923 11:08:30.145679    4732 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 11:08:30.146678    4732 cni.go:84] Creating CNI manager for ""
	I0923 11:08:30.146907    4732 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:08:30.146936    4732 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 11:08:30.147064    4732 start.go:340] cluster config:
	{Name:download-only-291700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-291700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:08:30.147064    4732 iso.go:125] acquiring lock: {Name:mkf1230aad788822e88d6c9e6923ac65cad813ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:08:30.150372    4732 out.go:97] Starting "download-only-291700" primary control-plane node in "download-only-291700" cluster
	I0923 11:08:30.150372    4732 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:08:30.195086    4732 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:08:30.195086    4732 cache.go:56] Caching tarball of preloaded images
	I0923 11:08:30.195757    4732 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:08:30.198622    4732 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 11:08:30.198734    4732 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 11:08:30.276117    4732 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-291700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-291700"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.21s)

TestDownloadOnly/v1.31.1/DeleteAll (0.85s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.85s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.53s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-291700
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.53s)

TestBinaryMirror (6.14s)

=== RUN   TestBinaryMirror
I0923 11:08:37.794828    3844 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-627900 --alsologtostderr --binary-mirror http://127.0.0.1:54606 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-627900 --alsologtostderr --binary-mirror http://127.0.0.1:54606 --driver=hyperv: (5.5494748s)
helpers_test.go:175: Cleaning up "binary-mirror-627900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-627900
--- PASS: TestBinaryMirror (6.14s)

TestOffline (370.31s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-679300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-679300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (5m25.1733308s)
helpers_test.go:175: Cleaning up "offline-docker-679300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-679300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-679300: (45.1331555s)
--- PASS: TestOffline (370.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-526200
addons_test.go:975: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-526200: exit status 85 (227.8482ms)

-- stdout --
	* Profile "addons-526200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-526200"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.23s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-526200
addons_test.go:986: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-526200: exit status 85 (217.1348ms)

-- stdout --
	* Profile "addons-526200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-526200"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

TestAddons/Setup (405.29s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-526200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-526200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns: (6m45.294741s)
--- PASS: TestAddons/Setup (405.29s)

TestAddons/serial/Volcano (62.76s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 16.1651ms
addons_test.go:835: volcano-scheduler stabilized in 16.2809ms
addons_test.go:851: volcano-controller stabilized in 22.7811ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-fbftk" [4fe5ab09-c7d4-439b-999a-e0a8209202ac] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0061531s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-gbv69" [a2e7cc37-d971-4ed0-b667-888848147b59] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0068431s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-jm645" [c023e123-36d2-45ff-a3e0-68574bbe3fa9] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.006832s
addons_test.go:870: (dbg) Run:  kubectl --context addons-526200 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-526200 create -f testdata\vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-526200 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [533abe23-f485-48e5-90d8-d77cc53ad616] Pending
helpers_test.go:344: "test-job-nginx-0" [533abe23-f485-48e5-90d8-d77cc53ad616] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [533abe23-f485-48e5-90d8-d77cc53ad616] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 22.0064414s
addons_test.go:906: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable volcano --alsologtostderr -v=1: (22.9551586s)
--- PASS: TestAddons/serial/Volcano (62.76s)

TestAddons/serial/GCPAuth/Namespaces (0.29s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-526200 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-526200 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.29s)

TestAddons/parallel/Ingress (60.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-526200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-526200 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-526200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cafa920c-7c0f-479d-90ce-f3994bb135d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cafa920c-7c0f-479d-90ce-f3994bb135d4] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0061369s
I0923 11:25:28.342019    3844 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (8.5786335s)
addons_test.go:284: (dbg) Run:  kubectl --context addons-526200 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 ip
addons_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 ip: (2.1567484s)
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 172.19.158.244
addons_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable ingress-dns --alsologtostderr -v=1: (13.689649s)
addons_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable ingress --alsologtostderr -v=1: (21.2907608s)
--- PASS: TestAddons/parallel/Ingress (60.49s)

TestAddons/parallel/InspektorGadget (24.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-m46sk" [0f023cad-34e4-46be-aeb0-023928904403] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0058456s
addons_test.go:789: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-526200
addons_test.go:789: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-526200: (18.8235435s)
--- PASS: TestAddons/parallel/InspektorGadget (24.83s)

TestAddons/parallel/MetricsServer (19.6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 8.2168ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-xhdnw" [c77a3e02-02fe-46db-9ee9-7ccb8e036301] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0062387s
addons_test.go:413: (dbg) Run:  kubectl --context addons-526200 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable metrics-server --alsologtostderr -v=1: (13.4353752s)
--- PASS: TestAddons/parallel/MetricsServer (19.60s)

TestAddons/parallel/CSI (78.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 9.638ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-526200 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-526200 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [14b36cbb-d56b-4411-92b1-3626a9af422c] Pending
helpers_test.go:344: "task-pv-pod" [14b36cbb-d56b-4411-92b1-3626a9af422c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [14b36cbb-d56b-4411-92b1-3626a9af422c] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0074923s
addons_test.go:528: (dbg) Run:  kubectl --context addons-526200 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-526200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-526200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-526200 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-526200 delete pod task-pv-pod: (1.6971065s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-526200 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-526200 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-526200 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [31a72833-17df-4163-b3f0-c5173ef978d4] Pending
helpers_test.go:344: "task-pv-pod-restore" [31a72833-17df-4163-b3f0-c5173ef978d4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [31a72833-17df-4163-b3f0-c5173ef978d4] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0065009s
addons_test.go:570: (dbg) Run:  kubectl --context addons-526200 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-526200 delete pod task-pv-pod-restore: (1.5863443s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-526200 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-526200 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable csi-hostpath-driver --alsologtostderr -v=1: (19.6985739s)
addons_test.go:586: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable volumesnapshots --alsologtostderr -v=1: (14.4267167s)
--- PASS: TestAddons/parallel/CSI (78.17s)

                                                
                                    
TestAddons/parallel/Headlamp (47.63s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-526200 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-526200 --alsologtostderr -v=1: (13.8861008s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-pnbjk" [1a0fe8b8-cab1-4be9-8a17-b3cfa19d4949] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-pnbjk" [1a0fe8b8-cab1-4be9-8a17-b3cfa19d4949] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.0053082s
addons_test.go:777: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable headlamp --alsologtostderr -v=1: (17.7331187s)
--- PASS: TestAddons/parallel/Headlamp (47.63s)

                                                
                                    
TestAddons/parallel/CloudSpanner (19.23s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-lxhrl" [4ccb3779-45aa-44dd-a2ce-1d1348f8023e] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0056411s
addons_test.go:808: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-526200
addons_test.go:808: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-526200: (14.2176453s)
--- PASS: TestAddons/parallel/CloudSpanner (19.23s)

                                                
                                    
TestAddons/parallel/LocalPath (28.91s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-526200 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-526200 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526200 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e679d7fa-56eb-45c8-bdc3-1fb768b88c90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e679d7fa-56eb-45c8-bdc3-1fb768b88c90] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e679d7fa-56eb-45c8-bdc3-1fb768b88c90] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0049702s
addons_test.go:938: (dbg) Run:  kubectl --context addons-526200 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 ssh "cat /opt/local-path-provisioner/pvc-ec44c691-0529-4f83-b313-a77082d0c7d8_default_test-pvc/file1"
addons_test.go:947: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 ssh "cat /opt/local-path-provisioner/pvc-ec44c691-0529-4f83-b313-a77082d0c7d8_default_test-pvc/file1": (9.1375109s)
addons_test.go:959: (dbg) Run:  kubectl --context addons-526200 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-526200 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (7.4484142s)
--- PASS: TestAddons/parallel/LocalPath (28.91s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (20.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dkcn2" [750dbf44-39a9-49fa-b3fd-2d026fcd91aa] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0072305s
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-526200
addons_test.go:1002: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-526200: (14.4762026s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.49s)

                                                
                                    
TestAddons/parallel/Yakd (24.77s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pfxs9" [400485b0-617c-46e3-8411-87da5bd4eb1e] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0049997s
addons_test.go:1014: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-526200 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-windows-amd64.exe -p addons-526200 addons disable yakd --alsologtostderr -v=1: (18.764048s)
--- PASS: TestAddons/parallel/Yakd (24.77s)

                                                
                                    
TestAddons/StoppedEnableDisable (49.24s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-526200
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-526200: (38.0910653s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-526200
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-526200: (4.537864s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-526200
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-526200: (4.0939451s)
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-526200
addons_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-526200: (2.5155699s)
--- PASS: TestAddons/StoppedEnableDisable (49.24s)

                                                
                                    
TestErrorSpam/start (14.99s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 start --dry-run: (4.9364946s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 start --dry-run: (5.0250514s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 start --dry-run: (5.0239623s)
--- PASS: TestErrorSpam/start (14.99s)

                                                
                                    
TestErrorSpam/status (31.62s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 status: (10.8670888s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 status: (10.3990683s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 status: (10.3541281s)
--- PASS: TestErrorSpam/status (31.62s)

                                                
                                    
TestErrorSpam/pause (19.6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 pause: (6.7058651s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 pause: (6.3864479s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 pause: (6.4998197s)
--- PASS: TestErrorSpam/pause (19.60s)

                                                
                                    
TestErrorSpam/unpause (20.3s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 unpause: (6.9228598s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 unpause: (6.7433599s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 unpause
E0923 11:33:13.437549    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 unpause: (6.6337546s)
--- PASS: TestErrorSpam/unpause (20.30s)

                                                
                                    
TestErrorSpam/stop (56.58s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 stop: (36.9035509s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 stop: (10.025405s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-191100 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-191100 stop: (9.6534748s)
--- PASS: TestErrorSpam/stop (56.58s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\3844\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (211.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-877700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0923 11:35:29.571522    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:35:57.292031    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-877700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m31.6782407s)
--- PASS: TestFunctional/serial/StartWithProxy (211.69s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (122.97s)

=== RUN   TestFunctional/serial/SoftStart
I0923 11:38:02.113057    3844 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-877700 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-877700 --alsologtostderr -v=8: (2m2.9649663s)
functional_test.go:663: soft start took 2m2.9665881s for "functional-877700" cluster.
I0923 11:40:05.087611    3844 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (122.97s)

                                                
                                    
TestFunctional/serial/KubeContext (0.11s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.11s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.22s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-877700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (23.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 cache add registry.k8s.io/pause:3.1: (8.0704684s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 cache add registry.k8s.io/pause:3.3: (7.828196s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 cache add registry.k8s.io/pause:latest: (7.7346192s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (23.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (9.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-877700 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1346795488\001
E0923 11:40:29.590763    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-877700 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1346795488\001: (1.8594021s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 cache add minikube-local-cache-test:functional-877700
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 cache add minikube-local-cache-test:functional-877700: (7.3724914s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 cache delete minikube-local-cache-test:functional-877700
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-877700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.54s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh sudo crictl images
functional_test.go:1124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh sudo crictl images: (8.2230553s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (32.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.3097646s)
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.2714114s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 cache reload: (7.2294971s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.2665506s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (32.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.42s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 kubectl -- --context functional-877700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.44s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs
E0923 11:50:29.631104    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs: (1m49.638136s)
--- PASS: TestFunctional/serial/LogsCmd (109.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd861181951\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd861181951\001\logs.txt: (2m0.5176751s)
--- PASS: TestFunctional/serial/LogsFileCmd (120.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 config get cpus: exit status 14 (343.3871ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 config get cpus: exit status 14 (239.4561ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.99s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-877700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-877700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 23 (4.9567213s)

-- stdout --
	* [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 11:56:26.724967    8152 out.go:345] Setting OutFile to fd 1440 ...
	I0923 11:56:26.769996    8152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:26.769996    8152 out.go:358] Setting ErrFile to fd 1396...
	I0923 11:56:26.769996    8152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:26.787399    8152 out.go:352] Setting JSON to false
	I0923 11:56:26.789405    8152 start.go:129] hostinfo: {"hostname":"minikube5","uptime":488563,"bootTime":1726604023,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:56:26.789405    8152 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:56:26.795997    8152 out.go:177] * [functional-877700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:56:26.799150    8152 notify.go:220] Checking for updates...
	I0923 11:56:26.799150    8152 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:56:26.801156    8152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:56:26.804140    8152 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:56:26.806146    8152 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:56:26.808144    8152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:56:26.811648    8152 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:56:26.812492    8152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:56:31.504678    8152 out.go:177] * Using the hyperv driver based on existing profile
	I0923 11:56:31.507161    8152 start.go:297] selected driver: hyperv
	I0923 11:56:31.507161    8152 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:31.507161    8152 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:56:31.549711    8152 out.go:201] 
	W0923 11:56:31.552124    8152 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 11:56:31.553944    8152 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-877700 --dry-run --alsologtostderr -v=1 --driver=hyperv
functional_test.go:991: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-877700 --dry-run --alsologtostderr -v=1 --driver=hyperv: (4.9335443s)
--- PASS: TestFunctional/parallel/DryRun (9.89s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-877700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-877700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 23 (4.8735044s)

-- stdout --
	* [functional-877700] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote hyperv basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 11:56:21.855305   10884 out.go:345] Setting OutFile to fd 1436 ...
	I0923 11:56:21.901303   10884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:21.901303   10884 out.go:358] Setting ErrFile to fd 1440...
	I0923 11:56:21.901303   10884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:21.919311   10884 out.go:352] Setting JSON to false
	I0923 11:56:21.921308   10884 start.go:129] hostinfo: {"hostname":"minikube5","uptime":488558,"bootTime":1726604023,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0923 11:56:21.921308   10884 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:56:21.925298   10884 out.go:177] * [functional-877700] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:56:21.930017   10884 notify.go:220] Checking for updates...
	I0923 11:56:21.930017   10884 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0923 11:56:21.936838   10884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:56:21.943908   10884 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0923 11:56:21.949890   10884 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:56:21.956889   10884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:56:21.960873   10884 config.go:182] Loaded profile config "functional-877700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:56:21.961885   10884 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:56:26.547069   10884 out.go:177] * Utilisation du pilote hyperv basé sur le profil existant
	I0923 11:56:26.551779   10884 start.go:297] selected driver: hyperv
	I0923 11:56:26.551779   10884 start.go:901] validating driver "hyperv" against &{Name:functional-877700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:functional-877700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.157.210 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:26.551779   10884 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:56:26.594828   10884 out.go:201] 
	W0923 11:56:26.597085   10884 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 11:56:26.599264   10884 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (4.87s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "echo hello"
functional_test.go:1725: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh "echo hello": (9.9488632s)
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh "cat /etc/hostname": (8.6320475s)
--- PASS: TestFunctional/parallel/SSHCmd (18.58s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.9595979s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh -n functional-877700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh -n functional-877700 "sudo cat /home/docker/cp-test.txt": (8.8033439s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 cp functional-877700:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2601736257\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 cp functional-877700:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2601736257\001\cp-test.txt: (8.5346s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh -n functional-877700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh -n functional-877700 "sudo cat /home/docker/cp-test.txt": (8.6839554s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (6.6499872s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh -n functional-877700 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh -n functional-877700 "sudo cat /tmp/does/not/exist/cp-test.txt": (8.4112059s)
--- PASS: TestFunctional/parallel/CpCmd (49.05s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/3844/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/test/nested/copy/3844/hosts"
functional_test.go:1931: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/test/nested/copy/3844/hosts": (8.1799223s)
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (8.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/3844.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/ssl/certs/3844.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/ssl/certs/3844.pem": (7.9691694s)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/3844.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /usr/share/ca-certificates/3844.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /usr/share/ca-certificates/3844.pem": (7.8945044s)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/ssl/certs/51391683.0": (8.2419433s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/38442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/ssl/certs/38442.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/ssl/certs/38442.pem": (8.622736s)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/38442.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /usr/share/ca-certificates/38442.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /usr/share/ca-certificates/38442.pem": (8.7631163s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (8.2691171s)
--- PASS: TestFunctional/parallel/CertSync (49.76s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-877700 ssh "sudo systemctl is-active crio": exit status 1 (7.9492459s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (7.95s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (1.989917s)
--- PASS: TestFunctional/parallel/License (2.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-877700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7276: OpenProcess: The parameter is incorrect.
helpers_test.go:502: unable to terminate pid 3896: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.4414356s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.71s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (10.731338s)
functional_test.go:1315: Took "10.7323494s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "181.0276ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (10.91s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (10.6277388s)
functional_test.go:1366: Took "10.6277388s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "171.1974ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.80s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.20s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 version -o=json --components: (6.6682115s)
--- PASS: TestFunctional/parallel/Version/components (6.67s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 update-context --alsologtostderr -v=2: (2.0864504s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 update-context --alsologtostderr -v=2: (2.1215713s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 update-context --alsologtostderr -v=2
E0923 12:03:32.768226    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 update-context --alsologtostderr -v=2: (2.1041843s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.11s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.9628735s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-877700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image rm kicbase/echo-server:functional-877700 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image rm kicbase/echo-server:functional-877700 --alsologtostderr: (1m0.1995926s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image ls: (1m0.3138641s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (120.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-877700
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-877700 image save --daemon kicbase/echo-server:functional-877700 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-877700 image save --daemon kicbase/echo-server:functional-877700 --alsologtostderr: (59.9234027s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-877700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (60.10s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-877700
--- PASS: TestFunctional/delete_echo-server_images (0.17s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-877700
--- PASS: TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-877700
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-565300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0923 12:13:16.786500    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:16.794232    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:16.806009    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:16.828108    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:16.870061    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:16.952521    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:17.114460    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:17.435984    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:18.077950    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:19.361140    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:21.923476    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:27.046854    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:37.290306    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:13:57.773645    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:14:38.739211    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:15:29.732113    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:16:00.667225    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:18:16.806105    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:18:44.520845    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:20:12.837766    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:20:29.752566    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-565300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m16.4468192s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: (33.4015464s)
--- PASS: TestMultiControlPlane/serial/StartCluster (649.85s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-565300 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: (1.8769944s)
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-565300 -- rollout status deployment/busybox: (13.426294s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0923 12:22:38.481053    3844 retry.go:31] will retry after 1.486587322s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0923 12:22:40.436759    3844 retry.go:31] will retry after 1.171861797s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0923 12:22:41.981706    3844 retry.go:31] will retry after 2.034808638s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0923 12:22:44.406196    3844 retry.go:31] will retry after 2.18582399s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0923 12:22:47.577679    3844 retry.go:31] will retry after 5.16779168s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0923 12:22:53.115270    3844 retry.go:31] will retry after 9.663935091s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I0923 12:23:03.152379    3844 retry.go:31] will retry after 9.283312699s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.2.3 10.244.2.2 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-45cpz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-45cpz -- nslookup kubernetes.io: (2.3133835s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-rjg7r -- nslookup kubernetes.io
E0923 12:23:16.827485    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-rjg7r -- nslookup kubernetes.io: (1.6465593s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-x4chx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-45cpz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-rjg7r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-x4chx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-45cpz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-rjg7r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-565300 -- exec busybox-7dff88458-x4chx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (57.97s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-565300 -v=7 --alsologtostderr
E0923 12:25:29.772402    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-565300 -v=7 --alsologtostderr: (3m15.4488166s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
E0923 12:28:16.846591    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: (43.2112624s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (238.66s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-565300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (43.2312234s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (43.23s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status --output json -v=7 --alsologtostderr
E0923 12:29:39.927570    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 status --output json -v=7 --alsologtostderr: (42.4938206s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp testdata\cp-test.txt ha-565300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp testdata\cp-test.txt ha-565300:/home/docker/cp-test.txt: (8.4073752s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt": (8.4820672s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4037996978\001\cp-test_ha-565300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4037996978\001\cp-test_ha-565300.txt: (8.5940796s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt": (8.5283431s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300:/home/docker/cp-test.txt ha-565300-m02:/home/docker/cp-test_ha-565300_ha-565300-m02.txt
E0923 12:30:29.792772    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300:/home/docker/cp-test.txt ha-565300-m02:/home/docker/cp-test_ha-565300_ha-565300-m02.txt: (14.7566772s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt": (8.4075234s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test_ha-565300_ha-565300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test_ha-565300_ha-565300-m02.txt": (8.4144325s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300:/home/docker/cp-test.txt ha-565300-m03:/home/docker/cp-test_ha-565300_ha-565300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300:/home/docker/cp-test.txt ha-565300-m03:/home/docker/cp-test_ha-565300_ha-565300-m03.txt: (14.557626s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt": (8.3604366s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test_ha-565300_ha-565300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test_ha-565300_ha-565300-m03.txt": (8.3397256s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300:/home/docker/cp-test.txt ha-565300-m04:/home/docker/cp-test_ha-565300_ha-565300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300:/home/docker/cp-test.txt ha-565300-m04:/home/docker/cp-test_ha-565300_ha-565300-m04.txt: (14.6395989s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test.txt": (8.4094816s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test_ha-565300_ha-565300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test_ha-565300_ha-565300-m04.txt": (8.3475883s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp testdata\cp-test.txt ha-565300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp testdata\cp-test.txt ha-565300-m02:/home/docker/cp-test.txt: (8.3020895s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt": (8.2550431s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4037996978\001\cp-test_ha-565300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4037996978\001\cp-test_ha-565300-m02.txt: (8.309369s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt": (8.3668528s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m02:/home/docker/cp-test.txt ha-565300:/home/docker/cp-test_ha-565300-m02_ha-565300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m02:/home/docker/cp-test.txt ha-565300:/home/docker/cp-test_ha-565300-m02_ha-565300.txt: (14.504138s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt": (8.353861s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test_ha-565300-m02_ha-565300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test_ha-565300-m02_ha-565300.txt": (8.4577932s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m02:/home/docker/cp-test.txt ha-565300-m03:/home/docker/cp-test_ha-565300-m02_ha-565300-m03.txt
E0923 12:33:16.867842    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m02:/home/docker/cp-test.txt ha-565300-m03:/home/docker/cp-test_ha-565300-m02_ha-565300-m03.txt: (14.7532679s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt": (8.4340537s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test_ha-565300-m02_ha-565300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test_ha-565300-m02_ha-565300-m03.txt": (8.4469092s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m02:/home/docker/cp-test.txt ha-565300-m04:/home/docker/cp-test_ha-565300-m02_ha-565300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m02:/home/docker/cp-test.txt ha-565300-m04:/home/docker/cp-test_ha-565300-m02_ha-565300-m04.txt: (14.7151678s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test.txt": (8.5190768s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test_ha-565300-m02_ha-565300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test_ha-565300-m02_ha-565300-m04.txt": (8.5269053s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp testdata\cp-test.txt ha-565300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp testdata\cp-test.txt ha-565300-m03:/home/docker/cp-test.txt: (8.4991461s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt": (8.4432634s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4037996978\001\cp-test_ha-565300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4037996978\001\cp-test_ha-565300-m03.txt: (8.3339399s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt": (8.310542s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt ha-565300:/home/docker/cp-test_ha-565300-m03_ha-565300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt ha-565300:/home/docker/cp-test_ha-565300-m03_ha-565300.txt: (14.6524489s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt": (8.5507932s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test_ha-565300-m03_ha-565300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test_ha-565300-m03_ha-565300.txt": (8.597221s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt ha-565300-m02:/home/docker/cp-test_ha-565300-m03_ha-565300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt ha-565300-m02:/home/docker/cp-test_ha-565300-m03_ha-565300-m02.txt: (14.898458s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt"
E0923 12:35:29.813018    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt": (8.5746845s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test_ha-565300-m03_ha-565300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test_ha-565300-m03_ha-565300-m02.txt": (8.3940658s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt ha-565300-m04:/home/docker/cp-test_ha-565300-m03_ha-565300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m03:/home/docker/cp-test.txt ha-565300-m04:/home/docker/cp-test_ha-565300-m03_ha-565300-m04.txt: (14.7122752s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test.txt": (8.3767363s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test_ha-565300-m03_ha-565300-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test_ha-565300-m03_ha-565300-m04.txt": (8.3854374s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp testdata\cp-test.txt ha-565300-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp testdata\cp-test.txt ha-565300-m04:/home/docker/cp-test.txt: (8.3304367s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt": (8.3453504s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4037996978\001\cp-test_ha-565300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile4037996978\001\cp-test_ha-565300-m04.txt: (8.3301948s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt": (8.3260098s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt ha-565300:/home/docker/cp-test_ha-565300-m04_ha-565300.txt
E0923 12:36:52.908048    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt ha-565300:/home/docker/cp-test_ha-565300-m04_ha-565300.txt: (14.6656965s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt": (8.4792765s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test_ha-565300-m04_ha-565300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300 "sudo cat /home/docker/cp-test_ha-565300-m04_ha-565300.txt": (8.4116498s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt ha-565300-m02:/home/docker/cp-test_ha-565300-m04_ha-565300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt ha-565300-m02:/home/docker/cp-test_ha-565300-m04_ha-565300-m02.txt: (14.7201611s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt": (8.4043558s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test_ha-565300-m04_ha-565300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m02 "sudo cat /home/docker/cp-test_ha-565300-m04_ha-565300-m02.txt": (8.3539037s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt ha-565300-m03:/home/docker/cp-test_ha-565300-m04_ha-565300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 cp ha-565300-m04:/home/docker/cp-test.txt ha-565300-m03:/home/docker/cp-test_ha-565300-m04_ha-565300-m03.txt: (14.7424766s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m04 "sudo cat /home/docker/cp-test.txt": (8.3751195s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test_ha-565300-m04_ha-565300-m03.txt"
E0923 12:38:16.887975    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 ssh -n ha-565300-m03 "sudo cat /home/docker/cp-test_ha-565300-m04_ha-565300-m03.txt": (8.4514164s)
--- PASS: TestMultiControlPlane/serial/CopyFile (555.38s)

TestMultiControlPlane/serial/StopSecondaryNode (67.35s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-565300 node stop m02 -v=7 --alsologtostderr: (33.7358409s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-565300 status -v=7 --alsologtostderr: exit status 7 (33.6126141s)

-- stdout --
	ha-565300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565300-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-565300-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565300-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0923 12:38:56.850995    4336 out.go:345] Setting OutFile to fd 752 ...
	I0923 12:38:56.908669    4336 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:38:56.908669    4336 out.go:358] Setting ErrFile to fd 1704...
	I0923 12:38:56.908669    4336 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:38:56.920362    4336 out.go:352] Setting JSON to false
	I0923 12:38:56.920362    4336 mustload.go:65] Loading cluster: ha-565300
	I0923 12:38:56.920996    4336 notify.go:220] Checking for updates...
	I0923 12:38:56.921814    4336 config.go:182] Loaded profile config "ha-565300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:38:56.921888    4336 status.go:174] checking status of ha-565300 ...
	I0923 12:38:57.075576    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:38:59.069493    4336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:38:59.069493    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:38:59.069493    4336 status.go:364] ha-565300 host status = "Running" (err=<nil>)
	I0923 12:38:59.069493    4336 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:38:59.070111    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:39:01.054424    4336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:39:01.054424    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:01.054424    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:39:03.393228    4336 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:39:03.393380    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:03.393380    4336 host.go:66] Checking if "ha-565300" exists ...
	I0923 12:39:03.401776    4336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:39:03.401776    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300 ).state
	I0923 12:39:05.298450    4336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:39:05.298450    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:05.299519    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300 ).networkadapters[0]).ipaddresses[0]
	I0923 12:39:07.645526    4336 main.go:141] libmachine: [stdout =====>] : 172.19.146.194
	
	I0923 12:39:07.645526    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:07.646005    4336 sshutil.go:53] new ssh client: &{IP:172.19.146.194 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300\id_rsa Username:docker}
	I0923 12:39:07.749651    4336 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3475819s)
	I0923 12:39:07.759188    4336 ssh_runner.go:195] Run: systemctl --version
	I0923 12:39:07.777122    4336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:39:07.801397    4336 kubeconfig.go:125] found "ha-565300" server: "https://172.19.159.254:8443"
	I0923 12:39:07.801475    4336 api_server.go:166] Checking apiserver status ...
	I0923 12:39:07.810955    4336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:39:07.847775    4336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2084/cgroup
	W0923 12:39:07.868894    4336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2084/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:39:07.877354    4336 ssh_runner.go:195] Run: ls
	I0923 12:39:07.884410    4336 api_server.go:253] Checking apiserver healthz at https://172.19.159.254:8443/healthz ...
	I0923 12:39:07.894375    4336 api_server.go:279] https://172.19.159.254:8443/healthz returned 200:
	ok
	I0923 12:39:07.894465    4336 status.go:456] ha-565300 apiserver status = Running (err=<nil>)
	I0923 12:39:07.894497    4336 status.go:176] ha-565300 status: &{Name:ha-565300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:39:07.894497    4336 status.go:174] checking status of ha-565300-m02 ...
	I0923 12:39:07.895396    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m02 ).state
	I0923 12:39:09.749203    4336 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 12:39:09.750275    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:09.750275    4336 status.go:364] ha-565300-m02 host status = "Stopped" (err=<nil>)
	I0923 12:39:09.750275    4336 status.go:377] host is not running, skipping remaining checks
	I0923 12:39:09.750275    4336 status.go:176] ha-565300-m02 status: &{Name:ha-565300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:39:09.750275    4336 status.go:174] checking status of ha-565300-m03 ...
	I0923 12:39:09.750848    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:39:11.639780    4336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:39:11.639780    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:11.639780    4336 status.go:364] ha-565300-m03 host status = "Running" (err=<nil>)
	I0923 12:39:11.639780    4336 host.go:66] Checking if "ha-565300-m03" exists ...
	I0923 12:39:11.640389    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:39:13.564918    4336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:39:13.565115    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:13.565115    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:39:15.835565    4336 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:39:15.835565    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:15.835808    4336 host.go:66] Checking if "ha-565300-m03" exists ...
	I0923 12:39:15.847877    4336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:39:15.847877    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m03 ).state
	I0923 12:39:17.674897    4336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:39:17.675811    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:17.675900    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m03 ).networkadapters[0]).ipaddresses[0]
	I0923 12:39:19.928449    4336 main.go:141] libmachine: [stdout =====>] : 172.19.153.80
	
	I0923 12:39:19.929032    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:19.929409    4336 sshutil.go:53] new ssh client: &{IP:172.19.153.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m03\id_rsa Username:docker}
	I0923 12:39:20.044991    4336 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.1967457s)
	I0923 12:39:20.056408    4336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:39:20.081814    4336 kubeconfig.go:125] found "ha-565300" server: "https://172.19.159.254:8443"
	I0923 12:39:20.081814    4336 api_server.go:166] Checking apiserver status ...
	I0923 12:39:20.092707    4336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:39:20.125128    4336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2194/cgroup
	W0923 12:39:20.141674    4336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2194/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:39:20.149838    4336 ssh_runner.go:195] Run: ls
	I0923 12:39:20.155570    4336 api_server.go:253] Checking apiserver healthz at https://172.19.159.254:8443/healthz ...
	I0923 12:39:20.162771    4336 api_server.go:279] https://172.19.159.254:8443/healthz returned 200:
	ok
	I0923 12:39:20.162839    4336 status.go:456] ha-565300-m03 apiserver status = Running (err=<nil>)
	I0923 12:39:20.162839    4336 status.go:176] ha-565300-m03 status: &{Name:ha-565300-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:39:20.162839    4336 status.go:174] checking status of ha-565300-m04 ...
	I0923 12:39:20.163367    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m04 ).state
	I0923 12:39:21.987317    4336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:39:21.987444    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:21.987444    4336 status.go:364] ha-565300-m04 host status = "Running" (err=<nil>)
	I0923 12:39:21.987444    4336 host.go:66] Checking if "ha-565300-m04" exists ...
	I0923 12:39:21.988524    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m04 ).state
	I0923 12:39:23.878447    4336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:39:23.878447    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:23.878526    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m04 ).networkadapters[0]).ipaddresses[0]
	I0923 12:39:26.107107    4336 main.go:141] libmachine: [stdout =====>] : 172.19.147.53
	
	I0923 12:39:26.108168    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:26.108216    4336 host.go:66] Checking if "ha-565300-m04" exists ...
	I0923 12:39:26.116154    4336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:39:26.117121    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-565300-m04 ).state
	I0923 12:39:27.965424    4336 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 12:39:27.966312    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:27.966312    4336 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-565300-m04 ).networkadapters[0]).ipaddresses[0]
	I0923 12:39:30.206480    4336 main.go:141] libmachine: [stdout =====>] : 172.19.147.53
	
	I0923 12:39:30.206480    4336 main.go:141] libmachine: [stderr =====>] : 
	I0923 12:39:30.207354    4336 sshutil.go:53] new ssh client: &{IP:172.19.147.53 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-565300-m04\id_rsa Username:docker}
	I0923 12:39:30.300021    4336 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.1835851s)
	I0923 12:39:30.311493    4336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:39:30.332868    4336 status.go:176] ha-565300-m04 status: &{Name:ha-565300-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (67.35s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (33.19s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (33.1869449s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (33.19s)

TestImageBuild/serial/Setup (175.49s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-572000 --driver=hyperv
E0923 12:45:29.854521    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:46:19.997428    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-572000 --driver=hyperv: (2m55.4931822s)
--- PASS: TestImageBuild/serial/Setup (175.49s)

TestImageBuild/serial/NormalBuild (9.12s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-572000
E0923 12:48:16.927846    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-572000: (9.1154659s)
--- PASS: TestImageBuild/serial/NormalBuild (9.12s)

TestImageBuild/serial/BuildWithBuildArg (7.82s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-572000
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-572000: (7.821225s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (7.82s)

TestImageBuild/serial/BuildWithDockerIgnore (7.23s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-572000
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-572000: (7.2318423s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.23s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.33s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-572000
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-572000: (7.3255115s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.33s)

                                                
                                    
TestJSONOutput/start/Command (207.75s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-148700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0923 12:50:29.874348    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-148700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m27.7442686s)
--- PASS: TestJSONOutput/start/Command (207.75s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (6.89s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-148700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-148700 --output=json --user=testUser: (6.8855525s)
--- PASS: TestJSONOutput/pause/Command (6.89s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (6.82s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-148700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-148700 --output=json --user=testUser: (6.8152325s)
--- PASS: TestJSONOutput/unpause/Command (6.82s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (36.52s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-148700 --output=json --user=testUser
E0923 12:53:16.948073    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:53:32.979000    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-148700 --output=json --user=testUser: (36.5177949s)
--- PASS: TestJSONOutput/stop/Command (36.52s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.78s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-155700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-155700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (206.7385ms)
-- stdout --
	{"specversion":"1.0","id":"002164f2-56c9-4ae1-b12a-e0d74c6c67e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-155700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a84929a-9474-410a-b093-2ecbc972afac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"86705dc9-6825-407c-98cb-7d91766eb351","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"65c24901-4a18-4388-8b92-6585548d224d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"c651a39f-35b8-4696-83b8-a11f2657a2a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"dd23b7eb-af8f-475c-b015-6f3f0abb82e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"17cc418b-9f84-4f41-8b36-c8beef955cde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-155700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-155700
--- PASS: TestErrorJSONOutput (0.78s)
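Each line that `--output=json` emits is a CloudEvents-style JSON object, as the captured stdout above shows. A minimal sketch of consuming that stream — the `summarize` helper is hypothetical, not part of the test suite; the two sample events are copied (abbreviated) from the output above:

```python
import json

# Two events from the captured stdout above: an "info" event and the
# DRV_UNSUPPORTED_OS "error" event that produced exit status 56.
raw_lines = [
    '{"specversion":"1.0","id":"c651a39f-35b8-4696-83b8-a11f2657a2a7",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info",'
    '"datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}',
    '{"specversion":"1.0","id":"17cc418b-9f84-4f41-8b36-c8beef955cde",'
    '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
    '"datacontenttype":"application/json",'
    '"data":{"advice":"","exitcode":"56","issues":"",'
    '"message":"The driver \'fail\' is not supported on windows/amd64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""}}',
]

def summarize(lines):
    """Map each CloudEvent line to (short event type, message) for quick scanning."""
    out = []
    for line in lines:
        ev = json.loads(line)
        kind = ev["type"].rsplit(".", 1)[-1]  # "step", "info", "error", ...
        out.append((kind, ev["data"].get("message", "")))
    return out

for kind, msg in summarize(raw_lines):
    print(f"{kind}: {msg}")
```

The `type` suffix is what distinguishes progress steps from the terminal error event the test asserts on.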

                                                
                                    
TestMainNoArgs (0.22s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.22s)

                                                
                                    
TestMinikubeProfile (477.37s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-649400 --driver=hyperv
E0923 12:55:29.894436    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-649400 --driver=hyperv: (2m57.2823716s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-649400 --driver=hyperv
E0923 12:58:16.968823    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-649400 --driver=hyperv: (2m54.9234293s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-649400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (21.1907705s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-649400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0923 13:00:29.914310    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (21.471772s)
helpers_test.go:175: Cleaning up "second-649400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-649400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-649400: (43.7895294s)
helpers_test.go:175: Cleaning up "first-649400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-649400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-649400: (38.1707344s)
--- PASS: TestMinikubeProfile (477.37s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (138.49s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-313100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0923 13:03:00.067902    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:03:16.988169    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-313100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m17.4841092s)
--- PASS: TestMountStart/serial/StartWithMountFirst (138.49s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (8.56s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-313100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-313100 ssh -- ls /minikube-host: (8.5547499s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.56s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (138.07s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-313100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0923 13:05:29.935223    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-313100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m17.0690751s)
--- PASS: TestMountStart/serial/StartWithMountSecond (138.07s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (8.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-313100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-313100 ssh -- ls /minikube-host: (8.253442s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (24.66s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-313100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-313100 --alsologtostderr -v=5: (24.6579936s)
--- PASS: TestMountStart/serial/DeleteFirst (24.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (8.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-313100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-313100 ssh -- ls /minikube-host: (8.2772737s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.28s)

                                                
                                    
TestMountStart/serial/Stop (27.47s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-313100
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-313100: (27.4662182s)
--- PASS: TestMountStart/serial/Stop (27.47s)

                                                
                                    
TestMountStart/serial/RestartStopped (104.48s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-313100
E0923 13:08:17.008591    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-313100: (1m43.4754198s)
--- PASS: TestMountStart/serial/RestartStopped (104.48s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (8.31s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-313100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-313100 ssh -- ls /minikube-host: (8.3064239s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (8.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (391.15s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-560300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0923 13:10:13.048295    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:10:29.955112    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:13:17.030269    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:15:29.976347    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-560300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m10.4995402s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 status --alsologtostderr: (20.6464781s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (391.15s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.02s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- rollout status deployment/busybox: (3.5873446s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-h4tgf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-h4tgf -- nslookup kubernetes.io: (1.9231318s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-wwgwh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-h4tgf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-wwgwh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-h4tgf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-560300 -- exec busybox-7dff88458-wwgwh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.02s)
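The DeployApp2Nodes checks above drive kubectl with JSONPath expressions such as `{.items[*].metadata.name}` and `{.items[*].status.podIP}`. A pure-Python sketch of what those expressions extract — the pod names are the real ones from the log, while the `pod_list` shape is trimmed and the IPs are invented for illustration:

```python
# Sample shape of `kubectl get pods -o json`, reduced to the fields the
# test's JSONPath expressions touch. The podIP values are made up.
pod_list = {
    "items": [
        {"metadata": {"name": "busybox-7dff88458-h4tgf"},
         "status": {"podIP": "10.244.0.3"}},
        {"metadata": {"name": "busybox-7dff88458-wwgwh"},
         "status": {"podIP": "10.244.1.3"}},
    ]
}

def items_field(obj, *path):
    """Rough equivalent of the JSONPath '{.items[*].<key>.<key>...}' form."""
    values = []
    for item in obj["items"]:
        cur = item
        for key in path:
            cur = cur[key]
        values.append(cur)
    return values

print(items_field(pod_list, "metadata", "name"))  # one busybox replica per node
print(items_field(pod_list, "status", "podIP"))
```

The test then execs `nslookup` in each returned pod name to verify DNS from both nodes.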

                                                
                                    
TestMultiNode/serial/AddNode (210.38s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-560300 -v 3 --alsologtostderr
E0923 13:18:17.049319    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:19:40.137649    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:20:29.996546    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-560300 -v 3 --alsologtostderr: (2m59.8180373s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 status --alsologtostderr: (30.5622635s)
--- PASS: TestMultiNode/serial/AddNode (210.38s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.14s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-560300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.14s)

                                                
                                    
TestMultiNode/serial/ProfileList (30.78s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (30.7746518s)
--- PASS: TestMultiNode/serial/ProfileList (30.78s)

                                                
                                    
TestMultiNode/serial/CopyFile (310.39s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 status --output json --alsologtostderr: (30.5319955s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp testdata\cp-test.txt multinode-560300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp testdata\cp-test.txt multinode-560300:/home/docker/cp-test.txt: (8.0522826s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test.txt": (8.0795682s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300.txt: (8.1137033s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test.txt": (8.1279738s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300:/home/docker/cp-test.txt multinode-560300-m02:/home/docker/cp-test_multinode-560300_multinode-560300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300:/home/docker/cp-test.txt multinode-560300-m02:/home/docker/cp-test_multinode-560300_multinode-560300-m02.txt: (14.1380735s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test.txt": (8.0686226s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test_multinode-560300_multinode-560300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test_multinode-560300_multinode-560300-m02.txt": (8.1274855s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300:/home/docker/cp-test.txt multinode-560300-m03:/home/docker/cp-test_multinode-560300_multinode-560300-m03.txt
E0923 13:23:17.069949    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300:/home/docker/cp-test.txt multinode-560300-m03:/home/docker/cp-test_multinode-560300_multinode-560300-m03.txt: (14.053107s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test.txt": (8.111834s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test_multinode-560300_multinode-560300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test_multinode-560300_multinode-560300-m03.txt": (8.1499361s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp testdata\cp-test.txt multinode-560300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp testdata\cp-test.txt multinode-560300-m02:/home/docker/cp-test.txt: (8.091465s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test.txt": (8.1501815s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300-m02.txt: (8.1106816s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test.txt": (8.0821899s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt multinode-560300:/home/docker/cp-test_multinode-560300-m02_multinode-560300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt multinode-560300:/home/docker/cp-test_multinode-560300-m02_multinode-560300.txt: (14.2139987s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test.txt": (8.1222306s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test_multinode-560300-m02_multinode-560300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test_multinode-560300-m02_multinode-560300.txt": (8.1767128s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt multinode-560300-m03:/home/docker/cp-test_multinode-560300-m02_multinode-560300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m02:/home/docker/cp-test.txt multinode-560300-m03:/home/docker/cp-test_multinode-560300-m02_multinode-560300-m03.txt: (14.2419457s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test.txt": (8.2353084s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test_multinode-560300-m02_multinode-560300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test_multinode-560300-m02_multinode-560300-m03.txt": (8.2011221s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp testdata\cp-test.txt multinode-560300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp testdata\cp-test.txt multinode-560300-m03:/home/docker/cp-test.txt: (8.3500912s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test.txt"
E0923 13:25:30.016261    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test.txt": (8.1775979s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile493158071\001\cp-test_multinode-560300-m03.txt: (8.0770335s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test.txt": (8.085215s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt multinode-560300:/home/docker/cp-test_multinode-560300-m03_multinode-560300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt multinode-560300:/home/docker/cp-test_multinode-560300-m03_multinode-560300.txt: (14.0473061s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test.txt": (8.0504913s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test_multinode-560300-m03_multinode-560300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300 "sudo cat /home/docker/cp-test_multinode-560300-m03_multinode-560300.txt": (8.1031369s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt multinode-560300-m02:/home/docker/cp-test_multinode-560300-m03_multinode-560300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 cp multinode-560300-m03:/home/docker/cp-test.txt multinode-560300-m02:/home/docker/cp-test_multinode-560300-m03_multinode-560300-m02.txt: (14.048724s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m03 "sudo cat /home/docker/cp-test.txt": (8.1414417s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test_multinode-560300-m03_multinode-560300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 ssh -n multinode-560300-m02 "sudo cat /home/docker/cp-test_multinode-560300-m03_multinode-560300-m02.txt": (8.1064908s)
--- PASS: TestMultiNode/serial/CopyFile (310.39s)

TestMultiNode/serial/StopNode (66.96s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 node stop m03
E0923 13:26:53.118340    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 node stop m03: (22.13843s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-560300 status: exit status 7 (22.3098834s)

-- stdout --
	multinode-560300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-560300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-560300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-560300 status --alsologtostderr: exit status 7 (22.5074797s)

-- stdout --
	multinode-560300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-560300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-560300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 13:27:37.196241    4476 out.go:345] Setting OutFile to fd 1640 ...
	I0923 13:27:37.252210    4476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:27:37.252210    4476 out.go:358] Setting ErrFile to fd 1964...
	I0923 13:27:37.252735    4476 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 13:27:37.264655    4476 out.go:352] Setting JSON to false
	I0923 13:27:37.264655    4476 mustload.go:65] Loading cluster: multinode-560300
	I0923 13:27:37.264655    4476 notify.go:220] Checking for updates...
	I0923 13:27:37.265937    4476 config.go:182] Loaded profile config "multinode-560300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 13:27:37.265937    4476 status.go:174] checking status of multinode-560300 ...
	I0923 13:27:37.266736    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:27:39.180502    4476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:27:39.180590    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:39.180590    4476 status.go:364] multinode-560300 host status = "Running" (err=<nil>)
	I0923 13:27:39.180590    4476 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:27:39.181295    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:27:41.092080    4476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:27:41.092080    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:41.092080    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:27:43.429656    4476 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:27:43.429656    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:43.429744    4476 host.go:66] Checking if "multinode-560300" exists ...
	I0923 13:27:43.438267    4476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:27:43.438267    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300 ).state
	I0923 13:27:45.286241    4476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:27:45.286241    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:45.286846    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300 ).networkadapters[0]).ipaddresses[0]
	I0923 13:27:47.499390    4476 main.go:141] libmachine: [stdout =====>] : 172.19.153.215
	
	I0923 13:27:47.499390    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:47.500471    4476 sshutil.go:53] new ssh client: &{IP:172.19.153.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300\id_rsa Username:docker}
	I0923 13:27:47.604580    4476 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.1660312s)
	I0923 13:27:47.617467    4476 ssh_runner.go:195] Run: systemctl --version
	I0923 13:27:47.634653    4476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:27:47.659407    4476 kubeconfig.go:125] found "multinode-560300" server: "https://172.19.153.215:8443"
	I0923 13:27:47.659407    4476 api_server.go:166] Checking apiserver status ...
	I0923 13:27:47.669085    4476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 13:27:47.699044    4476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2046/cgroup
	W0923 13:27:47.716044    4476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2046/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 13:27:47.727037    4476 ssh_runner.go:195] Run: ls
	I0923 13:27:47.733544    4476 api_server.go:253] Checking apiserver healthz at https://172.19.153.215:8443/healthz ...
	I0923 13:27:47.742455    4476 api_server.go:279] https://172.19.153.215:8443/healthz returned 200:
	ok
	I0923 13:27:47.742657    4476 status.go:456] multinode-560300 apiserver status = Running (err=<nil>)
	I0923 13:27:47.742697    4476 status.go:176] multinode-560300 status: &{Name:multinode-560300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:27:47.742697    4476 status.go:174] checking status of multinode-560300-m02 ...
	I0923 13:27:47.743219    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:27:49.614928    4476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:27:49.614928    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:49.615017    4476 status.go:364] multinode-560300-m02 host status = "Running" (err=<nil>)
	I0923 13:27:49.615017    4476 host.go:66] Checking if "multinode-560300-m02" exists ...
	I0923 13:27:49.615100    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:27:51.442729    4476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:27:51.443183    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:51.443183    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:27:53.649379    4476 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:27:53.649666    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:53.649723    4476 host.go:66] Checking if "multinode-560300-m02" exists ...
	I0923 13:27:53.661580    4476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 13:27:53.661580    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m02 ).state
	I0923 13:27:55.460693    4476 main.go:141] libmachine: [stdout =====>] : Running
	
	I0923 13:27:55.461509    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:55.461602    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-560300-m02 ).networkadapters[0]).ipaddresses[0]
	I0923 13:27:57.644063    4476 main.go:141] libmachine: [stdout =====>] : 172.19.147.68
	
	I0923 13:27:57.644063    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:57.644487    4476 sshutil.go:53] new ssh client: &{IP:172.19.147.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-560300-m02\id_rsa Username:docker}
	I0923 13:27:57.735060    4476 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.0732049s)
	I0923 13:27:57.743975    4476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 13:27:57.768757    4476 status.go:176] multinode-560300-m02 status: &{Name:multinode-560300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 13:27:57.768835    4476 status.go:174] checking status of multinode-560300-m03 ...
	I0923 13:27:57.768905    4476 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-560300-m03 ).state
	I0923 13:27:59.585559    4476 main.go:141] libmachine: [stdout =====>] : Off
	
	I0923 13:27:59.586574    4476 main.go:141] libmachine: [stderr =====>] : 
	I0923 13:27:59.586617    4476 status.go:364] multinode-560300-m03 host status = "Stopped" (err=<nil>)
	I0923 13:27:59.586617    4476 status.go:377] host is not running, skipping remaining checks
	I0923 13:27:59.586617    4476 status.go:176] multinode-560300-m03 status: &{Name:multinode-560300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (66.96s)

TestMultiNode/serial/StartAfterStop (169.73s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 node start m03 -v=7 --alsologtostderr
E0923 13:28:17.089545    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 node start m03 -v=7 --alsologtostderr: (2m18.5084658s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-560300 status -v=7 --alsologtostderr
E0923 13:30:30.036626    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-560300 status -v=7 --alsologtostderr: (31.0766147s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (169.73s)

TestPreload (444.89s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-275300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0923 13:43:17.151310    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:43:33.188478    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:45:30.096374    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-275300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m30.2317141s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-275300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-275300 image pull gcr.io/k8s-minikube/busybox: (7.7905303s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-275300
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-275300: (37.0354215s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-275300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0923 13:48:17.170877    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-275300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m24.0059547s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-275300 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-275300 image list: (6.3608518s)
helpers_test.go:175: Cleaning up "test-preload-275300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-275300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-275300: (39.4583043s)
--- PASS: TestPreload (444.89s)

TestScheduledStopWindows (300.48s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-295700 --memory=2048 --driver=hyperv
E0923 13:50:30.117506    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 13:53:00.279237    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-295700 --memory=2048 --driver=hyperv: (2m54.7417428s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-295700 --schedule 5m
E0923 13:53:17.191214    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-877700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-295700 --schedule 5m: (9.199803s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-295700 -n scheduled-stop-295700
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-295700 -n scheduled-stop-295700: exit status 1 (10.0107062s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-295700 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-295700 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.2091326s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-295700 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-295700 --schedule 5s: (9.160405s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-295700
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-295700: exit status 7 (2.0873408s)

-- stdout --
	scheduled-stop-295700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-295700 -n scheduled-stop-295700
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-295700 -n scheduled-stop-295700: exit status 7 (2.0922931s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-295700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-295700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-295700: (24.9690007s)
--- PASS: TestScheduledStopWindows (300.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-679300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-679300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (322.4867ms)

-- stdout --
	* [NoKubernetes-679300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

TestPause/serial/Start (178.4s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-679300 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-679300 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (2m58.3961649s)
--- PASS: TestPause/serial/Start (178.40s)

TestPause/serial/SecondStartNoReconfiguration (349.68s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-679300 --alsologtostderr -v=1 --driver=hyperv
E0923 14:00:13.258180    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-679300 --alsologtostderr -v=1 --driver=hyperv: (5m49.6495585s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (349.68s)

TestPause/serial/Pause (7.32s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-679300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-679300 --alsologtostderr -v=5: (7.3244457s)
--- PASS: TestPause/serial/Pause (7.32s)

TestPause/serial/VerifyStatus (10.89s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-679300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-679300 --output=json --layout=cluster: exit status 2 (10.8873599s)

-- stdout --
	{"Name":"pause-679300","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-679300","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (10.89s)
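The `--layout=cluster` JSON captured above encodes cluster and per-component state with HTTP-style status codes (200 = OK, 405 = Stopped, 418 = Paused). A minimal parsing sketch, using the payload exactly as recorded in this run (trimmed to the fields inspected):

```python
import json

# Payload from `minikube status -p pause-679300 --output=json --layout=cluster`,
# copied from the VerifyStatus run above. 200 = OK, 405 = Stopped, 418 = Paused.
raw = (
    '{"Name":"pause-679300","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-679300","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

cluster = json.loads(raw)
print(cluster["Name"], cluster["StatusName"])  # pause-679300 Paused
for node in cluster["Nodes"]:
    # Report the state of each node-level component (apiserver, kubelet, ...).
    for name, comp in sorted(node["Components"].items()):
        print(f'{node["Name"]}/{name}: {comp["StatusName"]}')
```

Note that, as the run above shows, `status` returned exit status 2 for the paused cluster, so a non-zero exit from this command does not by itself indicate a test failure.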

TestPause/serial/Unpause (6.89s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-679300 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-679300 --alsologtostderr -v=5: (6.8873827s)
--- PASS: TestPause/serial/Unpause (6.89s)

TestPause/serial/PauseAgain (7.16s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-679300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-679300 --alsologtostderr -v=5: (7.1604401s)
--- PASS: TestPause/serial/PauseAgain (7.16s)

TestPause/serial/DeletePaused (43.47s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-679300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-679300 --alsologtostderr -v=5: (43.4658973s)
--- PASS: TestPause/serial/DeletePaused (43.47s)

TestPause/serial/VerifyDeletedResources (24.49s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0923 14:05:30.177681    3844 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-526200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (24.4940148s)
--- PASS: TestPause/serial/VerifyDeletedResources (24.49s)


Test skip (27/200)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (6.68s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-877700 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-877700 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 1564: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (6.68s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
